Mar 12 07:58:45 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 12 07:58:45 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 07:58:45 crc restorecon[4684]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc 
restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc 
restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 
07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc 
restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc 
restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:45
crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 
07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc 
restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc 
restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc 
restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:45 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 
crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc 
restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 07:58:46 crc restorecon[4684]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc 
restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 07:58:46 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 12 07:58:46 crc kubenswrapper[4809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 07:58:46 crc kubenswrapper[4809]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 12 07:58:46 crc kubenswrapper[4809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 07:58:46 crc kubenswrapper[4809]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 07:58:46 crc kubenswrapper[4809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 12 07:58:46 crc kubenswrapper[4809]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.858763 4809 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864019 4809 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864050 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864060 4809 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864069 4809 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864077 4809 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864085 4809 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864093 4809 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864101 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864108 4809 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864145 4809 feature_gate.go:330] 
unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864153 4809 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864161 4809 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864168 4809 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864176 4809 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864184 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864193 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864200 4809 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864207 4809 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864216 4809 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864223 4809 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864231 4809 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864239 4809 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864246 4809 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864253 4809 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864261 4809 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864269 4809 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864276 4809 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864287 4809 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864296 4809 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864334 4809 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864345 4809 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864353 4809 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864363 4809 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864373 4809 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864384 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864395 4809 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864405 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864415 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864424 4809 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864432 4809 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864441 4809 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864450 4809 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864457 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864465 4809 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864472 4809 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864480 4809 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864488 4809 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864495 4809 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864503 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864511 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864518 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864528 4809 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864535 4809 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864544 4809 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864551 4809 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864559 4809 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864567 4809 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864574 4809 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864581 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864589 4809 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864596 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864604 4809 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864611 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864619 4809 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864626 4809 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864633 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864641 4809 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864649 4809 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864657 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864667 4809 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.864677 4809 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865762 4809 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865784 4809 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865797 4809 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865809 4809 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865820 4809 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865830 4809 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865841 4809 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865852 4809 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865861 4809 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865870 4809 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865880 4809 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865889 4809 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865898 4809 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865907 4809 flags.go:64] FLAG: --cgroup-root=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865916 4809 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865925 4809 flags.go:64] FLAG: --client-ca-file=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865934 4809 flags.go:64] FLAG: --cloud-config=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865944 4809 flags.go:64] FLAG: --cloud-provider=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865953 4809 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865965 4809 flags.go:64] FLAG: --cluster-domain=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865973 4809 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865983 4809 flags.go:64] FLAG: --config-dir=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.865991 4809 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866001 4809 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866012 4809 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866021 4809 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866031 4809 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866041 4809 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866050 4809 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866059 4809 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866068 4809 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866078 4809 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866086 4809 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866104 4809 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866140 4809 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866150 4809 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866159 4809 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866168 4809 flags.go:64] FLAG: --enable-server="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866177 4809 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866187 4809 flags.go:64] FLAG: --event-burst="100"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866197 4809 flags.go:64] FLAG: --event-qps="50"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866206 4809 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866216 4809 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866227 4809 flags.go:64] FLAG: --eviction-hard=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866240 4809 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866251 4809 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866262 4809 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866275 4809 flags.go:64] FLAG: --eviction-soft=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866286 4809 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866298 4809 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866309 4809 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866321 4809 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866332 4809 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866343 4809 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866354 4809 flags.go:64] FLAG: --feature-gates=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866367 4809 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866378 4809 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866391 4809 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866400 4809 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866408 4809 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866418 4809 flags.go:64] FLAG: --help="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866426 4809 flags.go:64] FLAG: --hostname-override=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866435 4809 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866445 4809 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866453 4809 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866462 4809 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866470 4809 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866479 4809 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866488 4809 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866499 4809 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866508 4809 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866517 4809 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866526 4809 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866536 4809 flags.go:64] FLAG: --kube-reserved=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866545 4809 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866553 4809 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866562 4809 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866571 4809 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866580 4809 flags.go:64] FLAG: --lock-file=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866589 4809 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866598 4809 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866607 4809 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866620 4809 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866628 4809 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866638 4809 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866646 4809 flags.go:64] FLAG: --logging-format="text"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866655 4809 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866665 4809 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866673 4809 flags.go:64] FLAG: --manifest-url=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866682 4809 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866693 4809 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866702 4809 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866712 4809 flags.go:64] FLAG: --max-pods="110"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866721 4809 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866732 4809 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866743 4809 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866754 4809 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866765 4809 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866777 4809 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866789 4809 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866828 4809 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866839 4809 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866849 4809 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866859 4809 flags.go:64] FLAG: --pod-cidr=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866869 4809 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866885 4809 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866895 4809 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866905 4809 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866915 4809 flags.go:64] FLAG: --port="10250"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866928 4809 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866938 4809 flags.go:64] FLAG: --provider-id=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866948 4809 flags.go:64] FLAG: --qos-reserved=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866958 4809 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866967 4809 flags.go:64] FLAG: --register-node="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866978 4809 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.866988 4809 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867017 4809 flags.go:64] FLAG: --registry-burst="10"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867028 4809 flags.go:64] FLAG: --registry-qps="5"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867039 4809 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867050 4809 flags.go:64] FLAG: --reserved-memory=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867063 4809 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867074 4809 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867086 4809 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867098 4809 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867107 4809 flags.go:64] FLAG: --runonce="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867157 4809 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867170 4809 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867182 4809 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867193 4809 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867202 4809 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867211 4809 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867221 4809 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867231 4809 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867239 4809 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867248 4809 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867257 4809 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867266 4809 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867275 4809 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867284 4809 flags.go:64] FLAG: --system-cgroups=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867292 4809 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867306 4809 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867318 4809 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867327 4809 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867338 4809 flags.go:64] FLAG: --tls-min-version=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867347 4809 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867355 4809 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867365 4809 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867377 4809 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867388 4809 flags.go:64] FLAG: --v="2"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867403 4809 flags.go:64] FLAG: --version="false"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867416 4809 flags.go:64] FLAG: --vmodule=""
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867437 4809 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.867449 4809 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867756 4809 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867773 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867785 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867795 4809 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867805 4809 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867815 4809 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867824 4809 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867847 4809 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867856 4809 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867866 4809 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867875 4809 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867886 4809 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867897 4809 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867908 4809 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867920 4809 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867929 4809 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867939 4809 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867948 4809 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867957 4809 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867968 4809 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867978 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867987 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.867996 4809 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868005 4809 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868017 4809 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868027 4809 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868040 4809 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868052 4809 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868063 4809 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868073 4809 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868083 4809 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868093 4809 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868102 4809 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868150 4809 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868164 4809 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868174 4809 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868183 4809 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868193 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868203 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868218 4809 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868228 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868238 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868248 4809 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868257 4809 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868265 4809 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868274 4809 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868284 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868294 4809 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868303 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868344 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868354 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868365 4809 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868375 4809 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868385 4809 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868392 4809 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868401 4809 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868411 4809 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868424 4809 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868435 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868444 4809 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868457 4809 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868467 4809 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868476 4809 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868486 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868495 4809 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868505 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868518 4809 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868530 4809 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868544 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868555 4809 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.868565 4809 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.868599 4809 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.886307 4809 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.886349 4809 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886454 4809 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886464 4809 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886471 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886476 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886482 4809 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886487 4809 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886492 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886497 4809 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886502 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886507 4809 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886512 4809 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886518 4809 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886523 4809 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886528 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886532 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886537 4809 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886542 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886548 4809 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886553 4809 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886560 4809 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 07:58:46 crc
kubenswrapper[4809]: W0312 07:58:46.886565 4809 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886571 4809 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886579 4809 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886587 4809 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886594 4809 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886600 4809 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886605 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886611 4809 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886621 4809 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886626 4809 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886631 4809 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886637 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886642 4809 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886647 4809 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 
07:58:46.886652 4809 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886657 4809 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886661 4809 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886667 4809 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886671 4809 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886676 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886681 4809 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886686 4809 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886691 4809 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886696 4809 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886703 4809 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886710 4809 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886715 4809 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886720 4809 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886725 4809 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886730 4809 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886735 4809 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886740 4809 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886746 4809 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886751 4809 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886756 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886763 4809 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886768 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886774 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886779 4809 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886785 4809 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886790 4809 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886794 4809 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886799 4809 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886804 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886809 4809 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886814 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886819 4809 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886824 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886830 4809 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886837 4809 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.886843 4809 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.886852 4809 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887010 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887020 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887026 4809 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887031 4809 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887036 4809 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887041 4809 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887046 4809 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887051 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887056 4809 feature_gate.go:330] unrecognized 
feature gate: RouteAdvertisements Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887061 4809 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887066 4809 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887071 4809 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887078 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887084 4809 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887092 4809 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887097 4809 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887103 4809 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887109 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887132 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887138 4809 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887144 4809 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887149 4809 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887154 4809 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887159 4809 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887164 4809 feature_gate.go:330] unrecognized feature gate: Example Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887169 4809 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887174 4809 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887178 4809 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887183 4809 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887188 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887210 4809 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887216 4809 feature_gate.go:330] 
unrecognized feature gate: NetworkLiveMigration Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887221 4809 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887226 4809 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887231 4809 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887235 4809 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887240 4809 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887254 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887260 4809 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887264 4809 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887269 4809 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887274 4809 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887279 4809 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887284 4809 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887288 4809 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887293 4809 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 07:58:46 
crc kubenswrapper[4809]: W0312 07:58:46.887299 4809 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887305 4809 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887311 4809 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887317 4809 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887322 4809 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887327 4809 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887332 4809 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887337 4809 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887342 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887346 4809 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887351 4809 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887356 4809 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887361 4809 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887366 4809 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 07:58:46 crc kubenswrapper[4809]: 
W0312 07:58:46.887371 4809 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887375 4809 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887380 4809 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887387 4809 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887393 4809 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887399 4809 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887404 4809 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887410 4809 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887415 4809 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887421 4809 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 07:58:46 crc kubenswrapper[4809]: W0312 07:58:46.887426 4809 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.887434 4809 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.888368 4809 server.go:940] "Client rotation is on, will bootstrap in background" Mar 12 07:58:46 crc kubenswrapper[4809]: E0312 07:58:46.892213 4809 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.895710 4809 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.895805 4809 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.897734 4809 server.go:997] "Starting client certificate rotation" Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.897770 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.897949 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.924338 4809 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.926972 4809 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 07:58:46 crc kubenswrapper[4809]: E0312 07:58:46.930072 4809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.947615 4809 log.go:25] "Validated CRI v1 runtime API" Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.988234 4809 log.go:25] "Validated CRI v1 image API" Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.990525 4809 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.995149 4809 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-12-07-54-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 12 07:58:46 crc kubenswrapper[4809]: I0312 07:58:46.995195 4809 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.020098 4809 manager.go:217] Machine: {Timestamp:2026-03-12 07:58:47.017293745 +0000 UTC m=+0.599329488 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f124771b-42e6-4243-8a1b-002105aaecbe 
BootID:f5485f66-5ce6-43bd-a720-57f2c3255098 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:6f:89:38 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:6f:89:38 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c1:4c:4c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ed:75:21 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a3:5a:80 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ba:c6:cc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:be:49:d9:23:fc:65 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:c9:95:44:ec:13 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] 
Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.020673 4809 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.020900 4809 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.021480 4809 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.021806 4809 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.021855 4809 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.023910 4809 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.023952 4809 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.024541 4809 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.024576 4809 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.025360 4809 state_mem.go:36] "Initialized new in-memory state store" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.025457 4809 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.028476 4809 kubelet.go:418] "Attempting to sync node with API server" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.028500 4809 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.028516 4809 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.028531 4809 kubelet.go:324] "Adding apiserver pod source" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.028589 4809 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.032687 4809 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.034022 4809 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.034066 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.034150 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.034247 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.034341 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.036223 4809 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038373 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038401 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038411 
4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038419 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038432 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038441 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038450 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038464 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038475 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038484 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038496 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.038504 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.039237 4809 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.039761 4809 server.go:1280] "Started kubelet" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.040200 4809 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.040764 4809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.041272 4809 server.go:236] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.041744 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:47 crc systemd[1]: Started Kubernetes Kubelet. Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.042218 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.042246 4809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.042871 4809 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.043012 4809 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.043210 4809 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.043493 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.043571 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.043702 4809 factory.go:55] Registering systemd factory Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.044685 4809 factory.go:221] Registration of the systemd container factory successfully Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.043873 4809 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.044744 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.047618 4809 factory.go:153] Registering CRI-O factory Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.047880 4809 factory.go:221] Registration of the crio container factory successfully Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.048297 4809 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.048505 4809 factory.go:103] Registering Raw factory Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.048890 4809 manager.go:1196] Started watching for new ooms in manager Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.049366 4809 server.go:460] "Adding debug handlers to kubelet server" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.050854 4809 manager.go:319] Starting recovery of all containers Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.051764 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c09123293718e default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.039717774 +0000 UTC m=+0.621753517,LastTimestamp:2026-03-12 07:58:47.039717774 +0000 UTC m=+0.621753517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059042 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059393 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059428 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059464 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059490 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059514 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059539 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059603 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059637 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059664 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059690 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059717 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059743 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059772 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059792 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059818 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059842 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" 
seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059868 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.059894 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060081 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060192 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060219 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060279 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 12 07:58:47 crc 
kubenswrapper[4809]: I0312 07:58:47.060299 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060361 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060430 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060488 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060524 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060543 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060560 4809 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060582 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060661 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060700 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060736 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060756 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060774 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060792 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060812 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060830 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060847 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060865 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060885 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060904 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060921 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060940 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060959 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060978 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.060997 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061018 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061035 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061058 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061075 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061100 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061152 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061177 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061195 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061216 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061236 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061254 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061276 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" 
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061296 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061313 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061330 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.061350 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063312 4809 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063354 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063381 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063401 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063420 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063441 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063460 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063477 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063495 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063515 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063534 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063554 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063574 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063597 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063622 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063647 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063667 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063685 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063703 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063720 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063740 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063758 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063776 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063794 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063812 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063831 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063849 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063867 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063886 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063904 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063920 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063961 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063980 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.063997 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064016 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064034 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064055 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064073 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064094 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064149 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064180 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064220 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064263 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064304 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064368 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064389 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064409 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064432 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064452 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064471 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064491 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064513 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064534 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064551 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064570 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064595 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064621 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064646 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064666 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064684 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064702 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064720 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064736 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064754 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064772 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064789 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064808 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064827 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064846 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064896 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064922 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064947 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064969 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.064988 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065006 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065024 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065041 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065061 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065080 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065108 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065187 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065206 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065224 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065241 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065261 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065279 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065298 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065315 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065343 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065367 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065386 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065411 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065438 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065454 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065474 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065491 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065518 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065535 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065555 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065593 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065630 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065659 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065685 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065711 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065736 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065761 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065785 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065827 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065846 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065864 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065883 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065902 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065921 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065940 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065958 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065976 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.065995 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066014 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066034 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066100 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066172 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066197 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066216 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066236 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066254 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783"
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066276 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066294 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066312 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066332 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066350 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066370 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066389 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066408 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066426 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066444 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066462 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066479 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066498 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066516 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066536 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066556 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066576 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066595 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066613 4809 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066630 4809 reconstruct.go:97] "Volume reconstruction finished" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.066642 4809 reconciler.go:26] "Reconciler: start to sync state" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.093420 4809 manager.go:324] Recovery completed Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.101546 4809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.104498 4809 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.104537 4809 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.104682 4809 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.104725 4809 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.107084 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.107593 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.107733 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.108845 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.108892 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.108904 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.110192 4809 cpu_manager.go:225] 
"Starting CPU manager" policy="none" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.110210 4809 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.110230 4809 state_mem.go:36] "Initialized new in-memory state store" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.131065 4809 policy_none.go:49] "None policy: Start" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.132308 4809 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.132361 4809 state_mem.go:35] "Initializing new in-memory state store" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.143933 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.205150 4809 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.212558 4809 manager.go:334] "Starting Device Plugin manager" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.212733 4809 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.212761 4809 server.go:79] "Starting device plugin registration server" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.213338 4809 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.213370 4809 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.213646 4809 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.213742 4809 plugin_manager.go:116] "The 
desired_state_of_world populator (plugin watcher) starts" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.213749 4809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.221002 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.245046 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.313837 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.315778 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.315816 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.315829 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.315854 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.316384 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.405651 4809 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.405810 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.408421 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.408488 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.408509 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.408725 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.409843 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.409882 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.409893 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.410272 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.410329 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.410386 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.410421 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.410331 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.411300 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.411326 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.411335 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.411454 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.411549 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.411588 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412029 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412106 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412166 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412510 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412554 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412652 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412802 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.412855 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413232 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413272 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413288 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413422 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413564 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413869 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413922 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413939 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.413951 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.414080 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.414109 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.414188 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.415498 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.415546 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.415562 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471344 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471415 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471465 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471627 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471693 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471754 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471811 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471913 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.471971 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.472002 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.472040 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.472072 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.472100 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.472159 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.516582 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.517910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc 
kubenswrapper[4809]: I0312 07:58:47.518001 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.518052 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.518091 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.518658 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.573594 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.573930 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.573831 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.573961 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574002 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574481 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574035 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574488 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574553 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: 
I0312 07:58:47.574599 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574651 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574657 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574695 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574709 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574730 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574762 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574759 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574795 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574817 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574827 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: 
I0312 07:58:47.574848 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574917 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.574963 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.575071 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.575083 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.575156 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.575177 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.575205 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.575241 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.575496 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.645953 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Mar 12 07:58:47 crc 
kubenswrapper[4809]: I0312 07:58:47.743176 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.757445 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.783336 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.793246 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.808624 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9f6d27039cc52942be2eb2a97e39b13aa90a7d903ac3e84bbb592ee00f231ff6 WatchSource:0}: Error finding container 9f6d27039cc52942be2eb2a97e39b13aa90a7d903ac3e84bbb592ee00f231ff6: Status 404 returned error can't find the container with id 9f6d27039cc52942be2eb2a97e39b13aa90a7d903ac3e84bbb592ee00f231ff6 Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.810878 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d092d561112864a18cb48782c4c56212bd026a3be47b7981e9cef7d8558f2403 WatchSource:0}: Error finding container d092d561112864a18cb48782c4c56212bd026a3be47b7981e9cef7d8558f2403: Status 404 returned error can't find the container with id d092d561112864a18cb48782c4c56212bd026a3be47b7981e9cef7d8558f2403 Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.817417 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.823247 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3ad641880b787bfd9000bdcdca89d7fc4894fdf4cea6f8d41b6edf778d9c40a4 WatchSource:0}: Error finding container 3ad641880b787bfd9000bdcdca89d7fc4894fdf4cea6f8d41b6edf778d9c40a4: Status 404 returned error can't find the container with id 3ad641880b787bfd9000bdcdca89d7fc4894fdf4cea6f8d41b6edf778d9c40a4 Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.839161 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5d6e553421555c0500e3a34205b71bb2002b7fd3e1bea3f17ab407912f7d1a5c WatchSource:0}: Error finding container 5d6e553421555c0500e3a34205b71bb2002b7fd3e1bea3f17ab407912f7d1a5c: Status 404 returned error can't find the container with id 5d6e553421555c0500e3a34205b71bb2002b7fd3e1bea3f17ab407912f7d1a5c Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.842072 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-387dc24dfc30f73b20f26f6ea9864dd1d7236df0dc6cb71304e618d9cd1561be WatchSource:0}: Error finding container 387dc24dfc30f73b20f26f6ea9864dd1d7236df0dc6cb71304e618d9cd1561be: Status 404 returned error can't find the container with id 387dc24dfc30f73b20f26f6ea9864dd1d7236df0dc6cb71304e618d9cd1561be Mar 12 07:58:47 crc kubenswrapper[4809]: W0312 07:58:47.915484 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: 
connect: connection refused Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.915581 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.918937 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.920585 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.920682 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.920691 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:47 crc kubenswrapper[4809]: I0312 07:58:47.920717 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:58:47 crc kubenswrapper[4809]: E0312 07:58:47.921074 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.043605 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.109296 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5d6e553421555c0500e3a34205b71bb2002b7fd3e1bea3f17ab407912f7d1a5c"} Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.110569 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"387dc24dfc30f73b20f26f6ea9864dd1d7236df0dc6cb71304e618d9cd1561be"} Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.111923 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ad641880b787bfd9000bdcdca89d7fc4894fdf4cea6f8d41b6edf778d9c40a4"} Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.113171 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d092d561112864a18cb48782c4c56212bd026a3be47b7981e9cef7d8558f2403"} Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.114154 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9f6d27039cc52942be2eb2a97e39b13aa90a7d903ac3e84bbb592ee00f231ff6"} Mar 12 07:58:48 crc kubenswrapper[4809]: W0312 07:58:48.211005 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:48 crc kubenswrapper[4809]: E0312 07:58:48.211139 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:48 crc kubenswrapper[4809]: W0312 07:58:48.362556 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:48 crc kubenswrapper[4809]: E0312 07:58:48.362621 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:48 crc kubenswrapper[4809]: E0312 07:58:48.447094 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Mar 12 07:58:48 crc kubenswrapper[4809]: W0312 07:58:48.617564 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:48 crc kubenswrapper[4809]: E0312 07:58:48.617683 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:48 
crc kubenswrapper[4809]: I0312 07:58:48.721152 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.722940 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.722994 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.723015 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:48 crc kubenswrapper[4809]: I0312 07:58:48.723052 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:58:48 crc kubenswrapper[4809]: E0312 07:58:48.723644 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.042807 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.119089 4809 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a08fd81f3edc99cf92fd9feec5a8ef32ad3566a45d6247cd7e7d947cea7979bd" exitCode=0 Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.119190 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a08fd81f3edc99cf92fd9feec5a8ef32ad3566a45d6247cd7e7d947cea7979bd"} Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 
07:58:49.119198 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.120535 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.120617 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.120640 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.121230 4809 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743" exitCode=0 Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.121309 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743"} Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.121446 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.122265 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.122307 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.122323 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.123784 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9" exitCode=0 Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.123912 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9"} Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.123961 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.126034 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.126088 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.126209 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.127953 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab"} Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.128016 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace"} Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.128031 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492"} Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.130258 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.130547 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2" exitCode=0 Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.130577 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2"} Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.130752 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.131943 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.131984 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.132000 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:49 crc kubenswrapper[4809]: E0312 07:58:49.132637 4809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" 
logger="UnhandledError" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.136005 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.137043 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.137078 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:49 crc kubenswrapper[4809]: I0312 07:58:49.137095 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.043563 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:50 crc kubenswrapper[4809]: E0312 07:58:50.048173 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.135919 4809 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f" exitCode=0 Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.136053 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.136178 4809 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.137523 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.137572 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.137592 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.139057 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.139098 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.139126 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.139177 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.140697 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.140724 4809 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.140736 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.142084 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.142164 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.142907 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.142935 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.142945 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.144617 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.144645 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.144683 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.144693 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.147250 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5a062252e0e5fcfebe2a2395329a45050cbcc0b991ac12aeaf1120d054fc4217"} Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.147386 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.148591 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.148607 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.148617 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:50 crc kubenswrapper[4809]: W0312 07:58:50.215032 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:50 crc kubenswrapper[4809]: E0312 07:58:50.215136 4809 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:50 crc kubenswrapper[4809]: W0312 07:58:50.242840 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Mar 12 07:58:50 crc kubenswrapper[4809]: E0312 07:58:50.242918 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.324461 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.325863 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.325920 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.325937 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:50 crc kubenswrapper[4809]: I0312 07:58:50.325970 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:58:50 crc kubenswrapper[4809]: E0312 07:58:50.326577 4809 kubelet_node_status.go:99] "Unable to register node with API 
server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.157879 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b4fb267753d3035607c5d77875cc73bc33c7760b7f9ae200c556a5741ac1014"} Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.158028 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.160064 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.160106 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.160152 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.162362 4809 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1" exitCode=0 Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.162411 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1"} Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.162473 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.162503 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 
07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.162536 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.162543 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.162511 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168213 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168425 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168587 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168492 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168805 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168830 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168311 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168911 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168941 4809 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.168385 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.169090 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.169147 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.348228 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.789307 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:51 crc kubenswrapper[4809]: I0312 07:58:51.859626 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.155398 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.175231 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75"} Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.175292 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.175312 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:52 crc kubenswrapper[4809]: 
I0312 07:58:52.175311 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e"} Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.175438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f"} Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.175460 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54"} Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.175248 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.176928 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.176996 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.177014 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.176936 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.177053 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.177075 4809 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.177055 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.177164 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:52 crc kubenswrapper[4809]: I0312 07:58:52.177023 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.183326 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc"} Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.183424 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.183501 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.183637 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.186964 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.187000 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.187083 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.186964 4809 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.187029 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.187343 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.187449 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.187101 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.187449 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.232937 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.462832 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.484612 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.492874 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.527659 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.528899 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:53 
crc kubenswrapper[4809]: I0312 07:58:53.528925 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.528937 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:53 crc kubenswrapper[4809]: I0312 07:58:53.528961 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.039347 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.186230 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.186268 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.186325 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187534 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187593 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187610 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187734 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187784 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187804 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187828 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187853 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:54 crc kubenswrapper[4809]: I0312 07:58:54.187866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.004299 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.189607 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.189675 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.191081 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.191175 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.191204 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.191243 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.191266 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.191209 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.516491 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.516688 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.517857 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.517889 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.517901 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:55 crc kubenswrapper[4809]: I0312 07:58:55.839207 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:58:56 crc kubenswrapper[4809]: I0312 07:58:56.192843 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:56 crc kubenswrapper[4809]: I0312 07:58:56.192894 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:58:56 crc kubenswrapper[4809]: I0312 07:58:56.197850 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:56 crc kubenswrapper[4809]: I0312 07:58:56.197926 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 12 07:58:56 crc kubenswrapper[4809]: I0312 07:58:56.197945 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:56 crc kubenswrapper[4809]: I0312 07:58:56.199292 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:58:56 crc kubenswrapper[4809]: I0312 07:58:56.199355 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:58:56 crc kubenswrapper[4809]: I0312 07:58:56.199374 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:58:57 crc kubenswrapper[4809]: E0312 07:58:57.221297 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 07:58:58 crc kubenswrapper[4809]: I0312 07:58:58.839359 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 07:58:58 crc kubenswrapper[4809]: I0312 07:58:58.839492 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.043893 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake 
timeout Mar 12 07:59:01 crc kubenswrapper[4809]: W0312 07:59:01.167450 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z Mar 12 07:59:01 crc kubenswrapper[4809]: E0312 07:59:01.167561 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 07:59:01 crc kubenswrapper[4809]: E0312 07:59:01.174076 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c09123293718e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.039717774 +0000 UTC m=+0.621753517,LastTimestamp:2026-03-12 07:58:47.039717774 +0000 UTC m=+0.621753517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:01 crc kubenswrapper[4809]: W0312 07:59:01.174656 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z Mar 12 07:59:01 crc kubenswrapper[4809]: E0312 07:59:01.174731 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 07:59:01 crc kubenswrapper[4809]: W0312 07:59:01.176050 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z Mar 12 07:59:01 crc kubenswrapper[4809]: E0312 07:59:01.176137 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 07:59:01 crc kubenswrapper[4809]: E0312 07:59:01.176918 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 12 07:59:01 crc kubenswrapper[4809]: W0312 07:59:01.177089 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z Mar 12 07:59:01 crc kubenswrapper[4809]: E0312 07:59:01.177156 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 07:59:01 crc kubenswrapper[4809]: E0312 07:59:01.178438 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 07:59:01 crc kubenswrapper[4809]: E0312 07:59:01.182524 4809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 07:59:01 crc 
kubenswrapper[4809]: I0312 07:59:01.183322 4809 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.183381 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.191916 4809 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.191980 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.790275 4809 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.790424 4809 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.867175 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.867597 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.871443 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.871494 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:01 crc kubenswrapper[4809]: I0312 07:59:01.871516 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.048615 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:02Z is after 2026-02-23T05:33:13Z Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.211680 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.214046 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="8b4fb267753d3035607c5d77875cc73bc33c7760b7f9ae200c556a5741ac1014" exitCode=255 Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.214094 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b4fb267753d3035607c5d77875cc73bc33c7760b7f9ae200c556a5741ac1014"} Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.214275 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.215214 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.215267 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.215286 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:02 crc kubenswrapper[4809]: I0312 07:59:02.216028 4809 scope.go:117] "RemoveContainer" containerID="8b4fb267753d3035607c5d77875cc73bc33c7760b7f9ae200c556a5741ac1014" Mar 12 07:59:03 crc kubenswrapper[4809]: I0312 07:59:03.048357 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:03Z is after 2026-02-23T05:33:13Z Mar 12 07:59:03 crc kubenswrapper[4809]: I0312 07:59:03.218747 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 12 07:59:03 crc kubenswrapper[4809]: I0312 07:59:03.220606 4809 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c"} Mar 12 07:59:03 crc kubenswrapper[4809]: I0312 07:59:03.220759 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:03 crc kubenswrapper[4809]: I0312 07:59:03.221534 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:03 crc kubenswrapper[4809]: I0312 07:59:03.221571 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:03 crc kubenswrapper[4809]: I0312 07:59:03.221583 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.046346 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:04Z is after 2026-02-23T05:33:13Z Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.047071 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.225724 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.226812 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 12 07:59:04 crc 
kubenswrapper[4809]: I0312 07:59:04.229442 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c" exitCode=255 Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.229497 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c"} Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.229544 4809 scope.go:117] "RemoveContainer" containerID="8b4fb267753d3035607c5d77875cc73bc33c7760b7f9ae200c556a5741ac1014" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.229592 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.231878 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.231924 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.231941 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.232805 4809 scope.go:117] "RemoveContainer" containerID="2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c" Mar 12 07:59:04 crc kubenswrapper[4809]: E0312 07:59:04.233175 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:04 crc kubenswrapper[4809]: I0312 07:59:04.236922 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.042152 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.042397 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.043962 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.044027 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.044051 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.045633 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:05Z is after 2026-02-23T05:33:13Z Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.060479 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.237791 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.239997 
4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.240149 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.240898 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.240964 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.240988 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.241409 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.241457 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.241478 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:05 crc kubenswrapper[4809]: I0312 07:59:05.242277 4809 scope.go:117] "RemoveContainer" containerID="2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c" Mar 12 07:59:05 crc kubenswrapper[4809]: E0312 07:59:05.242645 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:05 crc kubenswrapper[4809]: W0312 
07:59:05.725678 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:05Z is after 2026-02-23T05:33:13Z Mar 12 07:59:05 crc kubenswrapper[4809]: E0312 07:59:05.725829 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 07:59:06 crc kubenswrapper[4809]: I0312 07:59:06.045345 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:06Z is after 2026-02-23T05:33:13Z Mar 12 07:59:06 crc kubenswrapper[4809]: W0312 07:59:06.184014 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:06Z is after 2026-02-23T05:33:13Z Mar 12 07:59:06 crc kubenswrapper[4809]: E0312 07:59:06.184169 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 07:59:06 crc kubenswrapper[4809]: I0312 07:59:06.243224 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:06 crc kubenswrapper[4809]: I0312 07:59:06.244579 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:06 crc kubenswrapper[4809]: I0312 07:59:06.244643 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:06 crc kubenswrapper[4809]: I0312 07:59:06.244661 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:06 crc kubenswrapper[4809]: I0312 07:59:06.245618 4809 scope.go:117] "RemoveContainer" containerID="2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c" Mar 12 07:59:06 crc kubenswrapper[4809]: E0312 07:59:06.245925 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:07 crc kubenswrapper[4809]: I0312 07:59:07.044904 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:07Z is after 2026-02-23T05:33:13Z Mar 12 07:59:07 crc kubenswrapper[4809]: E0312 07:59:07.221434 4809 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 07:59:07 crc kubenswrapper[4809]: I0312 07:59:07.578502 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:07 crc kubenswrapper[4809]: I0312 07:59:07.579649 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:07 crc kubenswrapper[4809]: I0312 07:59:07.579678 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:07 crc kubenswrapper[4809]: I0312 07:59:07.579688 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:07 crc kubenswrapper[4809]: I0312 07:59:07.579707 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:59:07 crc kubenswrapper[4809]: E0312 07:59:07.581022 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:07Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 12 07:59:07 crc kubenswrapper[4809]: E0312 07:59:07.582695 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T07:59:07Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 07:59:08 crc kubenswrapper[4809]: I0312 07:59:08.049054 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 
07:59:08 crc kubenswrapper[4809]: I0312 07:59:08.840295 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 07:59:08 crc kubenswrapper[4809]: I0312 07:59:08.840985 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.046820 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.132063 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.132659 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.134323 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.134368 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.134380 4809 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.135073 4809 scope.go:117] "RemoveContainer" containerID="2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c" Mar 12 07:59:09 crc kubenswrapper[4809]: E0312 07:59:09.135298 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.624385 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 07:59:09 crc kubenswrapper[4809]: I0312 07:59:09.644686 4809 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 07:59:10 crc kubenswrapper[4809]: I0312 07:59:10.046396 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:10 crc kubenswrapper[4809]: W0312 07:59:10.277063 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 12 07:59:10 crc kubenswrapper[4809]: E0312 07:59:10.277156 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 07:59:11 crc 
kubenswrapper[4809]: I0312 07:59:11.049651 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:11 crc kubenswrapper[4809]: W0312 07:59:11.171809 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.171887 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.176348 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c09123293718e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.039717774 +0000 UTC m=+0.621753517,LastTimestamp:2026-03-12 07:58:47.039717774 +0000 UTC m=+0.621753517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.182690 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b2a46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC m=+0.690907022,LastTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC m=+0.690907022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.188712 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b31590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,LastTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.195425 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b33c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10891023 +0000 UTC m=+0.690945973,LastTimestamp:2026-03-12 07:58:47.10891023 +0000 UTC m=+0.690945973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.202278 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c09123d24cfce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.217016782 +0000 UTC m=+0.799052525,LastTimestamp:2026-03-12 07:58:47.217016782 +0000 UTC m=+0.799052525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.210158 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b2a46f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b2a46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC m=+0.690907022,LastTimestamp:2026-03-12 07:58:47.315803238 +0000 UTC m=+0.897838981,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.216908 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b31590\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b31590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,LastTimestamp:2026-03-12 07:58:47.315823219 +0000 UTC m=+0.897858962,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.224069 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b33c96\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b33c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10891023 +0000 UTC m=+0.690945973,LastTimestamp:2026-03-12 
07:58:47.315835509 +0000 UTC m=+0.897871252,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.231000 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b2a46f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b2a46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC m=+0.690907022,LastTimestamp:2026-03-12 07:58:47.408464655 +0000 UTC m=+0.990500448,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.239024 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b31590\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b31590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,LastTimestamp:2026-03-12 07:58:47.408502886 +0000 UTC m=+0.990538649,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.246620 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b33c96\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b33c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10891023 +0000 UTC m=+0.690945973,LastTimestamp:2026-03-12 07:58:47.408518367 +0000 UTC m=+0.990554140,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.253850 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b2a46f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b2a46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC m=+0.690907022,LastTimestamp:2026-03-12 07:58:47.409866893 +0000 UTC m=+0.991902626,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.261584 4809 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b31590\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b31590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,LastTimestamp:2026-03-12 07:58:47.409887784 +0000 UTC m=+0.991923517,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.268817 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b33c96\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b33c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10891023 +0000 UTC m=+0.690945973,LastTimestamp:2026-03-12 07:58:47.409897514 +0000 UTC m=+0.991933247,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.274469 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b2a46f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b2a46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC m=+0.690907022,LastTimestamp:2026-03-12 07:58:47.411317552 +0000 UTC m=+0.993353285,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.280952 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b31590\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b31590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,LastTimestamp:2026-03-12 07:58:47.411331673 +0000 UTC m=+0.993367406,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.287759 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b33c96\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b33c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10891023 +0000 UTC m=+0.690945973,LastTimestamp:2026-03-12 07:58:47.411341113 +0000 UTC m=+0.993376846,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.293935 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b2a46f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b2a46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC m=+0.690907022,LastTimestamp:2026-03-12 07:58:47.412074513 +0000 UTC m=+0.994110286,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.300173 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b31590\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b31590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,LastTimestamp:2026-03-12 07:58:47.412155565 +0000 UTC m=+0.994191338,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.305058 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b33c96\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b33c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10891023 +0000 UTC m=+0.690945973,LastTimestamp:2026-03-12 07:58:47.412210057 +0000 UTC m=+0.994245830,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.311751 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b2a46f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b2a46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC 
m=+0.690907022,LastTimestamp:2026-03-12 07:58:47.412529825 +0000 UTC m=+0.994565558,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.318411 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b31590\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b31590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,LastTimestamp:2026-03-12 07:58:47.412550656 +0000 UTC m=+0.994586389,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.324591 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b33c96\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b33c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10891023 +0000 UTC m=+0.690945973,LastTimestamp:2026-03-12 07:58:47.412558986 +0000 UTC m=+0.994594719,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.331100 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b2a46f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b2a46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.108871279 +0000 UTC m=+0.690907022,LastTimestamp:2026-03-12 07:58:47.413261606 +0000 UTC m=+0.995297379,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.334833 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c091236b31590\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c091236b31590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.10890024 +0000 UTC m=+0.690935983,LastTimestamp:2026-03-12 07:58:47.413282616 +0000 UTC m=+0.995318389,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.339658 4809 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c091261254059 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.821025369 +0000 UTC m=+1.403061142,LastTimestamp:2026-03-12 07:58:47.821025369 +0000 UTC m=+1.403061142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.346036 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c09126125cfc0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.82106208 +0000 UTC 
m=+1.403097853,LastTimestamp:2026-03-12 07:58:47.82106208 +0000 UTC m=+1.403097853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.352217 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912619693a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.82845226 +0000 UTC m=+1.410487993,LastTimestamp:2026-03-12 07:58:47.82845226 +0000 UTC m=+1.410487993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.358577 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912627fd643 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.843739203 +0000 UTC m=+1.425774976,LastTimestamp:2026-03-12 07:58:47.843739203 +0000 UTC m=+1.425774976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.365080 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091262d4f408 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:47.849317384 +0000 UTC m=+1.431353157,LastTimestamp:2026-03-12 07:58:47.849317384 +0000 UTC m=+1.431353157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 
07:59:11.370464 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c091282588f35 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.378036021 +0000 UTC m=+1.960071754,LastTimestamp:2026-03-12 07:58:48.378036021 +0000 UTC m=+1.960071754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.379531 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0912830363b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.389231544 +0000 UTC m=+1.971267277,LastTimestamp:2026-03-12 07:58:48.389231544 +0000 UTC m=+1.971267277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.384624 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c09128314d077 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.390373495 +0000 UTC m=+1.972409228,LastTimestamp:2026-03-12 07:58:48.390373495 +0000 UTC m=+1.972409228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.390597 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c09128315d82f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.390441007 +0000 UTC m=+1.972476730,LastTimestamp:2026-03-12 07:58:48.390441007 +0000 UTC 
m=+1.972476730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.395868 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0912831c2348 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.390853448 +0000 UTC m=+1.972889181,LastTimestamp:2026-03-12 07:58:48.390853448 +0000 UTC m=+1.972889181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.401190 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c09128320cec8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.391159496 +0000 UTC m=+1.973195219,LastTimestamp:2026-03-12 07:58:48.391159496 +0000 UTC 
m=+1.973195219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.408301 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091283a2dae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.399682277 +0000 UTC m=+1.981718060,LastTimestamp:2026-03-12 07:58:48.399682277 +0000 UTC m=+1.981718060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.412041 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091283ba92e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.401236709 +0000 UTC m=+1.983272452,LastTimestamp:2026-03-12 07:58:48.401236709 +0000 UTC m=+1.983272452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.416541 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c091283e8948e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.40425179 +0000 UTC m=+1.986287523,LastTimestamp:2026-03-12 07:58:48.40425179 +0000 UTC m=+1.986287523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.421728 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c09128408d116 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.406364438 +0000 UTC m=+1.988400171,LastTimestamp:2026-03-12 07:58:48.406364438 +0000 UTC m=+1.988400171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.425989 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912841a830d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.407524109 +0000 UTC m=+1.989559842,LastTimestamp:2026-03-12 07:58:48.407524109 +0000 UTC m=+1.989559842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.430153 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091298e2028d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.756142733 +0000 UTC m=+2.338178506,LastTimestamp:2026-03-12 07:58:48.756142733 +0000 UTC m=+2.338178506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.434331 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091299a617ce openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.76899323 +0000 UTC m=+2.351028963,LastTimestamp:2026-03-12 07:58:48.76899323 +0000 UTC m=+2.351028963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.440872 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091299b8e97a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.770226554 +0000 UTC m=+2.352262327,LastTimestamp:2026-03-12 07:58:48.770226554 +0000 UTC m=+2.352262327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.447675 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0912a507aa02 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.959937026 +0000 UTC m=+2.541972779,LastTimestamp:2026-03-12 07:58:48.959937026 +0000 UTC m=+2.541972779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.452365 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0912a5c34a2b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.972233259 +0000 UTC m=+2.554269032,LastTimestamp:2026-03-12 07:58:48.972233259 +0000 UTC m=+2.554269032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.456324 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0912a6017bc6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.97630919 +0000 UTC m=+2.558344963,LastTimestamp:2026-03-12 07:58:48.97630919 +0000 UTC m=+2.558344963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.461382 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c0912aebe7243 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.122910787 +0000 UTC m=+2.704946530,LastTimestamp:2026-03-12 07:58:49.122910787 +0000 UTC m=+2.704946530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.465878 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189c0912aed30c64 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.124260964 +0000 UTC m=+2.706296707,LastTimestamp:2026-03-12 07:58:49.124260964 +0000 UTC m=+2.706296707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.470080 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912af066b54 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.127627604 +0000 UTC m=+2.709663347,LastTimestamp:2026-03-12 07:58:49.127627604 +0000 UTC m=+2.709663347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.474994 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912af837290 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.135821456 +0000 UTC m=+2.717857209,LastTimestamp:2026-03-12 07:58:49.135821456 +0000 UTC m=+2.717857209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.479501 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0912b5028dba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.228037562 +0000 UTC m=+2.810073305,LastTimestamp:2026-03-12 07:58:49.228037562 +0000 UTC m=+2.810073305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.483553 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0912b62f6e1d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.247755805 +0000 UTC m=+2.829791548,LastTimestamp:2026-03-12 07:58:49.247755805 +0000 UTC m=+2.829791548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.487188 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c0912bc2706b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.347868345 +0000 UTC m=+2.929904088,LastTimestamp:2026-03-12 07:58:49.347868345 +0000 UTC m=+2.929904088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.492549 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912bc4600d1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.349898449 +0000 UTC m=+2.931934182,LastTimestamp:2026-03-12 07:58:49.349898449 +0000 UTC m=+2.931934182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.495816 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912bc9c7e2e 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.355566638 +0000 UTC m=+2.937602381,LastTimestamp:2026-03-12 07:58:49.355566638 +0000 UTC m=+2.937602381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.499996 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0912bce3f09f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.360248991 +0000 UTC m=+2.942284734,LastTimestamp:2026-03-12 07:58:49.360248991 +0000 UTC m=+2.942284734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.504470 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c0912bd0f616c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.363095916 +0000 UTC m=+2.945131659,LastTimestamp:2026-03-12 07:58:49.363095916 +0000 UTC m=+2.945131659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.508962 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912bd14a410 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.363440656 +0000 UTC m=+2.945476399,LastTimestamp:2026-03-12 07:58:49.363440656 +0000 UTC m=+2.945476399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.512609 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912bd290bc0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.36477792 +0000 UTC m=+2.946813653,LastTimestamp:2026-03-12 07:58:49.36477792 +0000 UTC m=+2.946813653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.516282 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912bdd97e8d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.376341645 +0000 UTC m=+2.958377398,LastTimestamp:2026-03-12 07:58:49.376341645 +0000 UTC m=+2.958377398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.521153 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912bdee239d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.377694621 +0000 UTC m=+2.959730354,LastTimestamp:2026-03-12 07:58:49.377694621 +0000 UTC m=+2.959730354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.525716 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0912be8f7038 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 
07:58:49.388265528 +0000 UTC m=+2.970301261,LastTimestamp:2026-03-12 07:58:49.388265528 +0000 UTC m=+2.970301261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.529869 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912c93b61a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.567306149 +0000 UTC m=+3.149341882,LastTimestamp:2026-03-12 07:58:49.567306149 +0000 UTC m=+3.149341882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.534059 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912c962d90a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.569892618 +0000 UTC m=+3.151928351,LastTimestamp:2026-03-12 07:58:49.569892618 +0000 UTC m=+3.151928351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.538358 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912c9fb30e4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.57987658 +0000 UTC m=+3.161912313,LastTimestamp:2026-03-12 07:58:49.57987658 +0000 UTC m=+3.161912313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.542160 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912ca0862df openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.580741343 +0000 UTC m=+3.162777076,LastTimestamp:2026-03-12 07:58:49.580741343 +0000 UTC m=+3.162777076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.546724 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912ca5198bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.585539259 +0000 UTC m=+3.167574992,LastTimestamp:2026-03-12 07:58:49.585539259 +0000 UTC m=+3.167574992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.551266 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912ca614ace openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.586567886 +0000 UTC m=+3.168603619,LastTimestamp:2026-03-12 07:58:49.586567886 +0000 UTC m=+3.168603619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.556340 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912d43a48b4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.751783604 +0000 UTC m=+3.333819337,LastTimestamp:2026-03-12 07:58:49.751783604 +0000 UTC m=+3.333819337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.560312 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912d458c5ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.753781706 +0000 UTC m=+3.335817439,LastTimestamp:2026-03-12 07:58:49.753781706 +0000 UTC m=+3.335817439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.565207 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c0912d5639008 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.771266056 
+0000 UTC m=+3.353301789,LastTimestamp:2026-03-12 07:58:49.771266056 +0000 UTC m=+3.353301789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.569458 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912d5843a20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.773406752 +0000 UTC m=+3.355442485,LastTimestamp:2026-03-12 07:58:49.773406752 +0000 UTC m=+3.355442485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.573063 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912d595bee5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.774554853 +0000 UTC m=+3.356590586,LastTimestamp:2026-03-12 07:58:49.774554853 +0000 UTC m=+3.356590586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.577691 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912e101c0d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.966182615 +0000 UTC m=+3.548218348,LastTimestamp:2026-03-12 07:58:49.966182615 +0000 UTC m=+3.548218348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.582719 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912e193ce43 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.975754307 +0000 UTC m=+3.557790040,LastTimestamp:2026-03-12 07:58:49.975754307 +0000 UTC m=+3.557790040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.587663 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912e1a48cfa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.976851706 +0000 UTC m=+3.558887449,LastTimestamp:2026-03-12 07:58:49.976851706 +0000 UTC m=+3.558887449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.589602 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0912eb59be91 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:50.139721361 +0000 UTC m=+3.721757094,LastTimestamp:2026-03-12 07:58:50.139721361 +0000 UTC m=+3.721757094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.594973 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912efc1abec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:50.213641196 +0000 UTC m=+3.795676969,LastTimestamp:2026-03-12 07:58:50.213641196 +0000 UTC m=+3.795676969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc 
kubenswrapper[4809]: E0312 07:59:11.596582 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912f0aa702c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:50.228895788 +0000 UTC m=+3.810931521,LastTimestamp:2026-03-12 07:58:50.228895788 +0000 UTC m=+3.810931521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.600184 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0912f92d04af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:50.371671215 +0000 UTC m=+3.953706978,LastTimestamp:2026-03-12 07:58:50.371671215 +0000 UTC m=+3.953706978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.606900 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0912f9e6867c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:50.383828604 +0000 UTC m=+3.965864377,LastTimestamp:2026-03-12 07:58:50.383828604 +0000 UTC m=+3.965864377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.612306 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c091328d7bfe9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.171389417 +0000 UTC m=+4.753425180,LastTimestamp:2026-03-12 07:58:51.171389417 +0000 UTC 
m=+4.753425180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.616037 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c091336e811f9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.407340025 +0000 UTC m=+4.989375758,LastTimestamp:2026-03-12 07:58:51.407340025 +0000 UTC m=+4.989375758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.622229 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c091337857937 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.417655607 +0000 UTC m=+4.999691340,LastTimestamp:2026-03-12 07:58:51.417655607 +0000 UTC m=+4.999691340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.629673 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c09133799e64c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.418994252 +0000 UTC m=+5.001029985,LastTimestamp:2026-03-12 07:58:51.418994252 +0000 UTC m=+5.001029985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.634422 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c091346fa3130 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.67696312 +0000 UTC m=+5.258998883,LastTimestamp:2026-03-12 07:58:51.67696312 +0000 UTC 
m=+5.258998883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.641528 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0913480d8dd3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.695009235 +0000 UTC m=+5.277045008,LastTimestamp:2026-03-12 07:58:51.695009235 +0000 UTC m=+5.277045008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.648197 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c09134828b8cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.696789711 +0000 UTC 
m=+5.278825474,LastTimestamp:2026-03-12 07:58:51.696789711 +0000 UTC m=+5.278825474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.655363 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0913561f3f9d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.931049885 +0000 UTC m=+5.513085628,LastTimestamp:2026-03-12 07:58:51.931049885 +0000 UTC m=+5.513085628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.662305 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c09135707bfb2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.946287026 +0000 UTC m=+5.528322769,LastTimestamp:2026-03-12 07:58:51.946287026 +0000 UTC 
m=+5.528322769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.668927 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c09135716d8d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:51.947276502 +0000 UTC m=+5.529312255,LastTimestamp:2026-03-12 07:58:51.947276502 +0000 UTC m=+5.529312255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.676215 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c09136309907f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:52.147732607 +0000 UTC 
m=+5.729768350,LastTimestamp:2026-03-12 07:58:52.147732607 +0000 UTC m=+5.729768350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.682240 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c091364067d81 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:52.164308353 +0000 UTC m=+5.746344096,LastTimestamp:2026-03-12 07:58:52.164308353 +0000 UTC m=+5.746344096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.688054 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0913641e9c32 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:52.165889074 +0000 UTC m=+5.747924847,LastTimestamp:2026-03-12 07:58:52.165889074 +0000 UTC m=+5.747924847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.692238 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c0913735a6e20 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:52.42146768 +0000 UTC m=+6.003503423,LastTimestamp:2026-03-12 07:58:52.42146768 +0000 UTC m=+6.003503423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.696301 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c091374698284 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 
07:58:52.439233156 +0000 UTC m=+6.021268929,LastTimestamp:2026-03-12 07:58:52.439233156 +0000 UTC m=+6.021268929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.701910 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 07:59:11 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-controller-manager-crc.189c0914f1e533d9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 12 07:59:11 crc kubenswrapper[4809]: body: Mar 12 07:59:11 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:58.839458777 +0000 UTC m=+12.421494570,LastTimestamp:2026-03-12 07:58:58.839458777 +0000 UTC m=+12.421494570,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 07:59:11 crc kubenswrapper[4809]: > Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.707300 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0914f1e66193 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:58.839536019 +0000 UTC m=+12.421571782,LastTimestamp:2026-03-12 07:58:58.839536019 +0000 UTC m=+12.421571782,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.711832 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 07:59:11 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-apiserver-crc.189c09157d9a74a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 12 07:59:11 crc kubenswrapper[4809]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 07:59:11 crc kubenswrapper[4809]: Mar 12 07:59:11 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:01.183370404 
+0000 UTC m=+14.765406137,LastTimestamp:2026-03-12 07:59:01.183370404 +0000 UTC m=+14.765406137,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 07:59:11 crc kubenswrapper[4809]: > Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.717805 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c09157d9ae3a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:01.183398825 +0000 UTC m=+14.765434558,LastTimestamp:2026-03-12 07:59:01.183398825 +0000 UTC m=+14.765434558,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.722951 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c09157d9a74a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 07:59:11 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-apiserver-crc.189c09157d9a74a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 12 07:59:11 crc kubenswrapper[4809]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 07:59:11 crc kubenswrapper[4809]: Mar 12 07:59:11 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:01.183370404 +0000 UTC m=+14.765406137,LastTimestamp:2026-03-12 07:59:01.191967968 +0000 UTC m=+14.774003701,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 07:59:11 crc kubenswrapper[4809]: > Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.728888 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c09157d9ae3a9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c09157d9ae3a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:01.183398825 +0000 UTC m=+14.765434558,LastTimestamp:2026-03-12 07:59:01.191999539 +0000 UTC m=+14.774035272,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.733811 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 07:59:11 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-apiserver-crc.189c0915a1c8d720 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Mar 12 07:59:11 crc kubenswrapper[4809]: body: Mar 12 07:59:11 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:01.790390048 +0000 UTC m=+15.372425811,LastTimestamp:2026-03-12 07:59:01.790390048 +0000 UTC m=+15.372425811,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 07:59:11 crc kubenswrapper[4809]: > Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.738354 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0915a1ca9e87 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:01.790506631 +0000 UTC m=+15.372542394,LastTimestamp:2026-03-12 07:59:01.790506631 +0000 UTC m=+15.372542394,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.744605 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c0912e1a48cfa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c0912e1a48cfa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:49.976851706 +0000 UTC m=+3.558887449,LastTimestamp:2026-03-12 07:59:02.217501529 +0000 UTC m=+15.799537302,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.750525 
4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 07:59:11 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-controller-manager-crc.189c09174607e777 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 07:59:11 crc kubenswrapper[4809]: body: Mar 12 07:59:11 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:08.840953719 +0000 UTC m=+22.422989472,LastTimestamp:2026-03-12 07:59:08.840953719 +0000 UTC m=+22.422989472,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 07:59:11 crc kubenswrapper[4809]: > Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.754397 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0917460a8067 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:08.841123943 +0000 UTC m=+22.423159676,LastTimestamp:2026-03-12 07:59:08.841123943 +0000 UTC m=+22.423159676,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:11 crc kubenswrapper[4809]: I0312 07:59:11.789642 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:59:11 crc kubenswrapper[4809]: I0312 07:59:11.790022 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:11 crc kubenswrapper[4809]: I0312 07:59:11.791509 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:11 crc kubenswrapper[4809]: I0312 07:59:11.791560 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:11 crc kubenswrapper[4809]: I0312 07:59:11.791584 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:11 crc kubenswrapper[4809]: I0312 07:59:11.792556 4809 scope.go:117] "RemoveContainer" containerID="2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c" Mar 12 07:59:11 crc kubenswrapper[4809]: E0312 07:59:11.792837 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:12 crc kubenswrapper[4809]: I0312 07:59:12.052507 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:13 crc kubenswrapper[4809]: I0312 07:59:13.049247 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:13 crc kubenswrapper[4809]: W0312 07:59:13.674016 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 12 07:59:13 crc kubenswrapper[4809]: E0312 07:59:13.674086 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 07:59:14 crc kubenswrapper[4809]: I0312 07:59:14.049986 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:14 crc kubenswrapper[4809]: I0312 07:59:14.582782 4809 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:14 crc kubenswrapper[4809]: I0312 07:59:14.584504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:14 crc kubenswrapper[4809]: I0312 07:59:14.584769 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:14 crc kubenswrapper[4809]: I0312 07:59:14.584976 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:14 crc kubenswrapper[4809]: I0312 07:59:14.585240 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:59:14 crc kubenswrapper[4809]: E0312 07:59:14.589285 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 07:59:14 crc kubenswrapper[4809]: E0312 07:59:14.589963 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 07:59:15 crc kubenswrapper[4809]: I0312 07:59:15.046802 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:15 crc kubenswrapper[4809]: W0312 07:59:15.047241 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:15 crc 
kubenswrapper[4809]: E0312 07:59:15.047314 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 07:59:16 crc kubenswrapper[4809]: I0312 07:59:16.049714 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:17 crc kubenswrapper[4809]: I0312 07:59:17.050556 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:17 crc kubenswrapper[4809]: E0312 07:59:17.221691 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.048577 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.840198 4809 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.840267 4809 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.840321 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.840440 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.841727 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.841762 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.841775 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.842317 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 07:59:18 crc kubenswrapper[4809]: I0312 07:59:18.842508 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" 
containerID="cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace" gracePeriod=30 Mar 12 07:59:18 crc kubenswrapper[4809]: E0312 07:59:18.845762 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c09174607e777\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 07:59:18 crc kubenswrapper[4809]: &Event{ObjectMeta:{kube-controller-manager-crc.189c09174607e777 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 07:59:18 crc kubenswrapper[4809]: body: Mar 12 07:59:18 crc kubenswrapper[4809]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:08.840953719 +0000 UTC m=+22.422989472,LastTimestamp:2026-03-12 07:59:18.840250608 +0000 UTC m=+32.422286341,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 07:59:18 crc kubenswrapper[4809]: > Mar 12 07:59:18 crc kubenswrapper[4809]: E0312 07:59:18.850173 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c0917460a8067\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c0917460a8067 openshift-kube-controller-manager 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:08.841123943 +0000 UTC m=+22.423159676,LastTimestamp:2026-03-12 07:59:18.840293319 +0000 UTC m=+32.422329052,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:18 crc kubenswrapper[4809]: E0312 07:59:18.858940 4809 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c09199a2b3d57 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:59:18.842490199 +0000 UTC m=+32.424525952,LastTimestamp:2026-03-12 07:59:18.842490199 +0000 UTC m=+32.424525952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:18 crc kubenswrapper[4809]: E0312 07:59:18.972355 4809 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c091283ba92e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091283ba92e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.401236709 +0000 UTC m=+1.983272452,LastTimestamp:2026-03-12 07:59:18.966065148 +0000 UTC m=+32.548100881,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.047079 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:19 crc kubenswrapper[4809]: E0312 07:59:19.216470 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c091298e2028d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091298e2028d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.756142733 +0000 UTC m=+2.338178506,LastTimestamp:2026-03-12 07:59:19.210468943 +0000 UTC m=+32.792504716,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:19 crc kubenswrapper[4809]: E0312 07:59:19.227359 4809 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c091299a617ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c091299a617ce openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 07:58:48.76899323 +0000 UTC m=+2.351028963,LastTimestamp:2026-03-12 07:59:19.220769461 +0000 UTC m=+32.802805204,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.281658 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.282185 4809 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace" exitCode=255 Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.282228 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace"} Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.282262 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4"} Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.282367 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.283732 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.283796 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:19 crc kubenswrapper[4809]: I0312 07:59:19.283812 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:20 crc kubenswrapper[4809]: I0312 07:59:20.051836 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" 
in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:21 crc kubenswrapper[4809]: I0312 07:59:21.048959 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:21 crc kubenswrapper[4809]: I0312 07:59:21.589445 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:21 crc kubenswrapper[4809]: I0312 07:59:21.591250 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:21 crc kubenswrapper[4809]: I0312 07:59:21.591305 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:21 crc kubenswrapper[4809]: I0312 07:59:21.591323 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:21 crc kubenswrapper[4809]: I0312 07:59:21.591360 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:59:21 crc kubenswrapper[4809]: E0312 07:59:21.598407 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 07:59:21 crc kubenswrapper[4809]: E0312 07:59:21.598756 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 07:59:22 crc kubenswrapper[4809]: I0312 07:59:22.048316 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:22 crc kubenswrapper[4809]: I0312 07:59:22.156468 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:59:22 crc kubenswrapper[4809]: I0312 07:59:22.156741 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:22 crc kubenswrapper[4809]: I0312 07:59:22.158200 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:22 crc kubenswrapper[4809]: I0312 07:59:22.158311 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:22 crc kubenswrapper[4809]: I0312 07:59:22.158376 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:23 crc kubenswrapper[4809]: I0312 07:59:23.049469 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:24 crc kubenswrapper[4809]: I0312 07:59:24.047824 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:25 crc kubenswrapper[4809]: I0312 07:59:25.048916 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:25 crc kubenswrapper[4809]: I0312 07:59:25.839173 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:59:25 crc kubenswrapper[4809]: I0312 07:59:25.839366 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:25 crc kubenswrapper[4809]: I0312 07:59:25.840780 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:25 crc kubenswrapper[4809]: I0312 07:59:25.840825 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:25 crc kubenswrapper[4809]: I0312 07:59:25.840837 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:25 crc kubenswrapper[4809]: I0312 07:59:25.843803 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:59:26 crc kubenswrapper[4809]: I0312 07:59:26.050390 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:26 crc kubenswrapper[4809]: I0312 07:59:26.299136 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:26 crc kubenswrapper[4809]: I0312 07:59:26.300163 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:26 crc kubenswrapper[4809]: I0312 07:59:26.300236 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:26 crc kubenswrapper[4809]: I0312 07:59:26.300256 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:27 crc kubenswrapper[4809]: 
I0312 07:59:27.048650 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:27 crc kubenswrapper[4809]: I0312 07:59:27.105436 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:27 crc kubenswrapper[4809]: I0312 07:59:27.106886 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:27 crc kubenswrapper[4809]: I0312 07:59:27.106939 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:27 crc kubenswrapper[4809]: I0312 07:59:27.106958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:27 crc kubenswrapper[4809]: I0312 07:59:27.107778 4809 scope.go:117] "RemoveContainer" containerID="2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c" Mar 12 07:59:27 crc kubenswrapper[4809]: E0312 07:59:27.221843 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 07:59:27 crc kubenswrapper[4809]: W0312 07:59:27.893209 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 12 07:59:27 crc kubenswrapper[4809]: E0312 07:59:27.893266 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 
07:59:28.046970 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:28 crc kubenswrapper[4809]: W0312 07:59:28.195909 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:28 crc kubenswrapper[4809]: E0312 07:59:28.195976 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.305862 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.306490 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.308901 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2"} Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.309072 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.310186 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.310214 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.310223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.310619 4809 scope.go:117] "RemoveContainer" containerID="05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2" Mar 12 07:59:28 crc kubenswrapper[4809]: E0312 07:59:28.310759 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.599326 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.601166 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.601223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.601241 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:28 crc kubenswrapper[4809]: I0312 07:59:28.601274 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:59:28 crc kubenswrapper[4809]: E0312 07:59:28.606718 4809 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 07:59:28 crc kubenswrapper[4809]: E0312 07:59:28.606871 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.049626 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.131590 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.314677 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.315933 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.319835 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2" exitCode=255 Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.319899 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2"} Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.319944 4809 scope.go:117] "RemoveContainer" containerID="2ed278765de4f4bd5b11200d5e07a45400d8e984e628dc5ca43cd9b171e8ac0c" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.319978 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.325585 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.325671 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.325707 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:29 crc kubenswrapper[4809]: I0312 07:59:29.327655 4809 scope.go:117] "RemoveContainer" containerID="05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2" Mar 12 07:59:29 crc kubenswrapper[4809]: E0312 07:59:29.328034 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:30 crc kubenswrapper[4809]: I0312 07:59:30.047968 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:30 crc kubenswrapper[4809]: I0312 
07:59:30.326384 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 07:59:30 crc kubenswrapper[4809]: I0312 07:59:30.329566 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:30 crc kubenswrapper[4809]: I0312 07:59:30.330856 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:30 crc kubenswrapper[4809]: I0312 07:59:30.330911 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:30 crc kubenswrapper[4809]: I0312 07:59:30.330921 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:30 crc kubenswrapper[4809]: I0312 07:59:30.331547 4809 scope.go:117] "RemoveContainer" containerID="05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2" Mar 12 07:59:30 crc kubenswrapper[4809]: E0312 07:59:30.331739 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:31 crc kubenswrapper[4809]: I0312 07:59:31.049867 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:31 crc kubenswrapper[4809]: I0312 07:59:31.790415 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 07:59:31 crc kubenswrapper[4809]: I0312 07:59:31.790657 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:31 crc kubenswrapper[4809]: I0312 07:59:31.792425 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:31 crc kubenswrapper[4809]: I0312 07:59:31.792670 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:31 crc kubenswrapper[4809]: I0312 07:59:31.792711 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:31 crc kubenswrapper[4809]: I0312 07:59:31.793718 4809 scope.go:117] "RemoveContainer" containerID="05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2" Mar 12 07:59:31 crc kubenswrapper[4809]: E0312 07:59:31.794056 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:32 crc kubenswrapper[4809]: I0312 07:59:32.049359 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:32 crc kubenswrapper[4809]: I0312 07:59:32.163575 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 07:59:32 crc kubenswrapper[4809]: I0312 07:59:32.163808 4809 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 12 07:59:32 crc kubenswrapper[4809]: I0312 07:59:32.165232 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:32 crc kubenswrapper[4809]: I0312 07:59:32.165408 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:32 crc kubenswrapper[4809]: I0312 07:59:32.165548 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:32 crc kubenswrapper[4809]: W0312 07:59:32.331219 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 07:59:32 crc kubenswrapper[4809]: E0312 07:59:32.331734 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 07:59:33 crc kubenswrapper[4809]: I0312 07:59:33.051254 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:34 crc kubenswrapper[4809]: I0312 07:59:34.048648 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:35 crc kubenswrapper[4809]: I0312 07:59:35.049819 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: 
User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:35 crc kubenswrapper[4809]: I0312 07:59:35.607265 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:35 crc kubenswrapper[4809]: I0312 07:59:35.608950 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:35 crc kubenswrapper[4809]: I0312 07:59:35.609245 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:35 crc kubenswrapper[4809]: I0312 07:59:35.609436 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:35 crc kubenswrapper[4809]: I0312 07:59:35.609648 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:59:35 crc kubenswrapper[4809]: E0312 07:59:35.614279 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 07:59:35 crc kubenswrapper[4809]: E0312 07:59:35.614516 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 07:59:36 crc kubenswrapper[4809]: I0312 07:59:36.048495 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:37 crc kubenswrapper[4809]: I0312 07:59:37.048149 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:37 crc kubenswrapper[4809]: E0312 07:59:37.222286 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 07:59:37 crc kubenswrapper[4809]: W0312 07:59:37.920735 4809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 12 07:59:37 crc kubenswrapper[4809]: E0312 07:59:37.920799 4809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 07:59:38 crc kubenswrapper[4809]: I0312 07:59:38.048581 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:39 crc kubenswrapper[4809]: I0312 07:59:39.047710 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:40 crc kubenswrapper[4809]: I0312 07:59:40.049316 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:41 crc 
kubenswrapper[4809]: I0312 07:59:41.047659 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:41 crc kubenswrapper[4809]: I0312 07:59:41.355072 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 07:59:41 crc kubenswrapper[4809]: I0312 07:59:41.355328 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:41 crc kubenswrapper[4809]: I0312 07:59:41.357582 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:41 crc kubenswrapper[4809]: I0312 07:59:41.357626 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:41 crc kubenswrapper[4809]: I0312 07:59:41.357644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:42 crc kubenswrapper[4809]: I0312 07:59:42.050160 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:42 crc kubenswrapper[4809]: I0312 07:59:42.615335 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:42 crc kubenswrapper[4809]: I0312 07:59:42.617190 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:42 crc kubenswrapper[4809]: I0312 07:59:42.617258 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:42 crc 
kubenswrapper[4809]: I0312 07:59:42.617277 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:42 crc kubenswrapper[4809]: I0312 07:59:42.617314 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:59:42 crc kubenswrapper[4809]: E0312 07:59:42.623539 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 07:59:42 crc kubenswrapper[4809]: E0312 07:59:42.623712 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 07:59:43 crc kubenswrapper[4809]: I0312 07:59:43.048739 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:44 crc kubenswrapper[4809]: I0312 07:59:44.047059 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:45 crc kubenswrapper[4809]: I0312 07:59:45.049257 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:46 crc kubenswrapper[4809]: I0312 07:59:46.050451 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: 
User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:47 crc kubenswrapper[4809]: I0312 07:59:47.049958 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:47 crc kubenswrapper[4809]: I0312 07:59:47.106073 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:47 crc kubenswrapper[4809]: I0312 07:59:47.107541 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:47 crc kubenswrapper[4809]: I0312 07:59:47.107600 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:47 crc kubenswrapper[4809]: I0312 07:59:47.107618 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:47 crc kubenswrapper[4809]: I0312 07:59:47.108589 4809 scope.go:117] "RemoveContainer" containerID="05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2" Mar 12 07:59:47 crc kubenswrapper[4809]: E0312 07:59:47.108896 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 07:59:47 crc kubenswrapper[4809]: E0312 07:59:47.222887 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 07:59:48 crc kubenswrapper[4809]: I0312 07:59:48.047610 4809 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:49 crc kubenswrapper[4809]: I0312 07:59:49.047917 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:49 crc kubenswrapper[4809]: I0312 07:59:49.624180 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:49 crc kubenswrapper[4809]: I0312 07:59:49.630431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:49 crc kubenswrapper[4809]: I0312 07:59:49.630477 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:49 crc kubenswrapper[4809]: I0312 07:59:49.630494 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:49 crc kubenswrapper[4809]: I0312 07:59:49.630528 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:59:49 crc kubenswrapper[4809]: E0312 07:59:49.631716 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 07:59:49 crc kubenswrapper[4809]: E0312 07:59:49.639734 4809 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 07:59:50 crc kubenswrapper[4809]: 
I0312 07:59:50.049896 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:51 crc kubenswrapper[4809]: I0312 07:59:51.046383 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:52 crc kubenswrapper[4809]: I0312 07:59:52.045990 4809 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 07:59:52 crc kubenswrapper[4809]: I0312 07:59:52.619919 4809 csr.go:261] certificate signing request csr-rshpq is approved, waiting to be issued Mar 12 07:59:52 crc kubenswrapper[4809]: I0312 07:59:52.628935 4809 csr.go:257] certificate signing request csr-rshpq is issued Mar 12 07:59:52 crc kubenswrapper[4809]: I0312 07:59:52.696371 4809 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 12 07:59:52 crc kubenswrapper[4809]: I0312 07:59:52.896643 4809 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 12 07:59:53 crc kubenswrapper[4809]: I0312 07:59:53.629986 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-22 06:48:25.082945713 +0000 UTC Mar 12 07:59:53 crc kubenswrapper[4809]: I0312 07:59:53.630037 4809 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6838h48m31.452911481s for next certificate rotation Mar 12 07:59:54 crc kubenswrapper[4809]: I0312 07:59:54.106462 4809 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:54 crc kubenswrapper[4809]: I0312 07:59:54.107869 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:54 crc kubenswrapper[4809]: I0312 07:59:54.107910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:54 crc kubenswrapper[4809]: I0312 07:59:54.107918 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.011378 4809 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.640070 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.641504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.641543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.641554 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.641670 4809 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.649994 4809 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.650393 4809 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.650445 4809 kubelet_node_status.go:585] "Error updating node status, 
will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.654171 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.654263 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.654286 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.654309 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.654330 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T07:59:56Z","lastTransitionTime":"2026-03-12T07:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.672340 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.676456 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.676489 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.676497 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.676510 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.676537 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T07:59:56Z","lastTransitionTime":"2026-03-12T07:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.691799 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.696352 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.696459 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.696560 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.696656 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.696755 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T07:59:56Z","lastTransitionTime":"2026-03-12T07:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.711614 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.715738 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.715927 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.716021 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.716131 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 07:59:56 crc kubenswrapper[4809]: I0312 07:59:56.716228 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T07:59:56Z","lastTransitionTime":"2026-03-12T07:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.730865 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.731230 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.731334 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.832170 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:56 crc kubenswrapper[4809]: E0312 07:59:56.932557 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.033599 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.134727 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.223911 4809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.235470 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.336087 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.436492 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: 
E0312 07:59:57.536894 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.638016 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.739061 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.840004 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:57 crc kubenswrapper[4809]: E0312 07:59:57.941161 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.041331 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.141924 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.242962 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.343984 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.444841 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.544939 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.645975 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.746771 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.847781 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:58 crc kubenswrapper[4809]: E0312 07:59:58.948765 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.049871 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.150504 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.251372 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.351825 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.452312 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.552847 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.653422 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.753833 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.854556 4809 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Mar 12 07:59:59 crc kubenswrapper[4809]: E0312 07:59:59.955632 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.056755 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.157738 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.258352 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.359280 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.460141 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.560709 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.661536 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.762317 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.863143 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:00 crc kubenswrapper[4809]: E0312 08:00:00.964310 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.065401 4809 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.105218 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.106499 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.106566 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.106580 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.107281 4809 scope.go:117] "RemoveContainer" containerID="05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.166062 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.267058 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.273792 4809 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.367152 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.416740 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.418737 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9"} Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.418963 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.420769 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.420802 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.420812 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.467393 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.567782 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.669031 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.769409 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: I0312 08:00:01.789724 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.870384 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:01 crc kubenswrapper[4809]: E0312 08:00:01.971488 4809 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.071987 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.172300 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.272422 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.373418 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.423511 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.424011 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.426228 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" exitCode=255 Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.426264 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9"} Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.426328 4809 scope.go:117] "RemoveContainer" 
containerID="05b573c6b217799f42371b76347a819d5189af76b94989bcff50001b563248d2" Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.426424 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.427678 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.427714 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.427728 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:02 crc kubenswrapper[4809]: I0312 08:00:02.428450 4809 scope.go:117] "RemoveContainer" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.428650 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.473932 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.574330 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.674478 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.775443 4809 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.875574 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:02 crc kubenswrapper[4809]: E0312 08:00:02.976518 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.076928 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.178017 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.278944 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.379368 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.430283 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.432407 4809 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.433162 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.433222 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.433241 4809 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.434276 4809 scope.go:117] "RemoveContainer" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.434632 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.479740 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.579889 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.680916 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.781449 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: E0312 08:00:03.881646 4809 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.948872 4809 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.984912 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.984972 4809 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.984991 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.985015 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:03 crc kubenswrapper[4809]: I0312 08:00:03.985032 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:03Z","lastTransitionTime":"2026-03-12T08:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.074366 4809 apiserver.go:52] "Watching apiserver" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.079580 4809 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.079755 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.080254 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.080387 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.080424 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.080430 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.080433 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.080838 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.080913 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.080973 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.081244 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.084416 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.084513 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.084589 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.084589 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.084721 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.084727 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.084776 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.084954 4809 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.085051 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.087055 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.087291 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.087365 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.087397 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.087419 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.110772 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.128067 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.138286 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.144306 4809 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.154781 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.164369 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.169950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.169997 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170020 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170041 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170157 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170291 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170362 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170399 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170443 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170487 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170511 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170528 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170532 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170565 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170669 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170691 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170706 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170729 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170757 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.170781 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.171140 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.171796 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.171815 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.171832 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.171906 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.171952 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.171984 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172022 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172341 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172344 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172415 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172612 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172615 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172471 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172666 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172694 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172722 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172745 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172765 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172789 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172811 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172814 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172832 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172854 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172876 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172897 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172918 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172941 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172962 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.172982 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173002 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173021 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173042 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173066 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173091 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173133 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173166 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173394 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173424 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173448 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173471 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173493 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173515 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173534 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173556 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173580 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173605 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173629 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173651 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173675 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173699 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173722 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 
08:00:04.173778 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173802 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173825 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173847 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173870 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173892 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173917 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173940 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173964 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.173987 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174011 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174034 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174055 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174079 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174103 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174147 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174172 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174197 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174224 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174233 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174253 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174278 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174303 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174330 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174354 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174379 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174405 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174430 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174453 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174429 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174475 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174638 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174821 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174864 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174909 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.174977 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175051 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175212 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175351 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175373 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175452 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175538 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175581 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175655 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175725 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176005 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176044 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176149 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176191 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176261 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176297 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176372 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176441 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176476 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176553 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176629 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176667 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176739 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176804 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176841 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176917 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 
08:00:04.177078 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177182 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177258 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177296 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177365 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177395 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177422 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177457 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177490 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177537 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177572 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 
08:00:04.177610 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177648 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177686 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177719 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177780 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177813 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177844 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177877 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177909 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177941 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177973 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: 
I0312 08:00:04.178008 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178040 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178169 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178214 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178248 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178282 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175451 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178318 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178395 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178508 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178584 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178653 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178691 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178763 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178834 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178871 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178944 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179022 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179415 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179460 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179499 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179547 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179581 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179623 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179658 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179693 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179727 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179765 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179810 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179847 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179879 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179914 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179947 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179981 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180017 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180053 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180092 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180154 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180190 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180224 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180256 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180291 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180325 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180360 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 08:00:04 crc 
kubenswrapper[4809]: I0312 08:00:04.180395 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180430 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180466 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180500 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180536 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180574 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180610 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180648 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180692 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180728 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180763 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 08:00:04 
crc kubenswrapper[4809]: I0312 08:00:04.180798 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180831 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180875 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180908 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180970 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181007 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181054 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181095 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181185 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181222 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 08:00:04 crc 
kubenswrapper[4809]: I0312 08:00:04.181260 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181299 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181338 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181380 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181422 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181458 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181492 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181531 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180581 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182569 4809 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182600 4809 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182623 4809 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182643 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182664 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182687 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182706 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182725 4809 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182744 4809 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182765 4809 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182785 4809 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182803 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182821 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182839 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182856 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182873 4809 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182889 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182908 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc 
kubenswrapper[4809]: I0312 08:00:04.182928 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.185416 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.191867 4809 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.193357 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175855 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175881 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.175835 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176341 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176348 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176752 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176784 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.176919 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177192 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177207 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177259 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177431 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177482 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.177283 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178071 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178364 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178366 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178390 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178434 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178511 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.178573 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179530 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179567 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179571 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.179938 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180396 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180352 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180492 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.180949 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181110 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181161 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181224 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181358 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181347 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181672 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181720 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182011 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182178 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.181785 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182642 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.182689 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.183045 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.183233 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.183340 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.183518 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.183576 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.183663 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.183674 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.183697 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.184604 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.184713 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.184964 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.185053 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.185003 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.185833 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.186027 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.186267 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.186480 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.186496 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.186810 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.186838 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.186997 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.187234 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.187305 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.187759 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.188229 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.188303 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.188302 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.188543 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.189164 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.189586 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.189620 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.189648 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.189999 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.190301 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.194729 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:04.694685597 +0000 UTC m=+78.276721330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.193431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.194993 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.195015 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.195027 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.190332 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.190380 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.195236 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:04.695224531 +0000 UTC m=+78.277260264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.190480 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.190498 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.190526 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.190764 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.190944 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.190954 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.191384 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.191432 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.191499 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.191898 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.191907 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.191909 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.192271 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.192341 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.192481 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.192720 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.192800 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.193146 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.199817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.217544 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:00:04.717496561 +0000 UTC m=+78.299532344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.217905 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.218461 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.218494 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.218533 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.218555 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.218503 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.218620 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.218641 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:04.718614281 +0000 UTC m=+78.300650074 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.219151 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.220278 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.223730 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.228291 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.230196 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.235448 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.235971 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.236019 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.236073 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.237387 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.238281 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.238434 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.238837 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.238891 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.240110 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.238994 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.238938 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.240217 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.239125 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.240860 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.243141 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.244378 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.244653 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.244933 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.245254 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.246666 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.246823 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.247408 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.248530 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.253023 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.253387 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.253646 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.254217 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.254303 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.254393 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.254552 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.254918 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.255127 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.255156 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.255639 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.257644 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.258766 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.258929 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.258956 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.259192 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.259206 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.259197 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.259268 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.258817 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.259279 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.260605 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.260778 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.260913 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.261222 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.261324 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.261372 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.261405 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.261457 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.261228 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.261773 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.262150 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.262247 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.262252 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.262290 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.262407 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.262501 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.262794 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.262867 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.264036 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.264671 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.264687 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.264748 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.265207 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.265340 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.265523 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.265582 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.265843 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.265857 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.266073 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.265864 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.266195 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.266327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.266567 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.266647 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.266708 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.266802 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:04.766786308 +0000 UTC m=+78.348822031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.277101 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.280368 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.281526 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284109 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284236 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284320 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284355 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284331 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284471 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284528 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284581 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284631 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284686 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284741 4809 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284796 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284852 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284904 4809 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.284959 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285015 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285066 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285146 4809 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285202 4809 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285261 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285312 4809 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285369 4809 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285421 4809 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285479 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285530 4809 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285583 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285634 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285718 4809 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285779 4809 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285838 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285895 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285948 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.285999 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286056 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286133 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286200 4809 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286255 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286310 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286365 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286417 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286471 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286525 4809 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286579 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286630 4809 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286685 4809 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286737 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286794 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286846 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286902 4809 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.286953 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287007 4809 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287064 4809 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287140 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287195 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287252 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287324 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287382 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287440 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287523 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287576 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287632 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287688 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287743 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287795 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287846 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287899 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.287950 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288004 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288066 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288142 4809 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288200 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288250 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288300 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288350 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288410 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288465 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288520 4809 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288572 4809 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288630 4809 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288685 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288737 4809 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288796 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288848 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288901 4809 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.288955 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289010 4809 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289065 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289135 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289190 4809 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289241 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289298 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289354 4809 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289408 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289463 4809 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289522 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289577 4809 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289636 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289702 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289758 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289813 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289865 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289919 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.289976 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290031 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290083 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290160 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290221 4809 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290296 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290358 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290409 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290459 4809 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290515 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290566 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290623 4809 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290678 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290732 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290791 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290846 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.290897 4809 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291142 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291202 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291290 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291407 4809 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291469 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291527 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291579 4809 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291634 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291689 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291744 4809 reconciler_common.go:293] "Volume detached for volume \"config\"
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291836 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291899 4809 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.291952 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292007 4809 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292058 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292125 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292199 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node 
\"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292331 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292390 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292446 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292504 4809 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292590 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292658 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292711 4809 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc 
kubenswrapper[4809]: I0312 08:00:04.292762 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292819 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292876 4809 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292927 4809 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.292982 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293034 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293085 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293152 4809 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293234 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293298 4809 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293355 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293407 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293458 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293513 4809 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293568 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc 
kubenswrapper[4809]: I0312 08:00:04.293626 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293678 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293758 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293810 4809 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.293954 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294041 4809 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294101 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294178 4809 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294237 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294324 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294406 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294461 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294519 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294571 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294627 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node 
\"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294678 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294730 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294784 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294840 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294893 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.294956 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.297459 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.297490 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.297501 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.297514 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.297523 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.399753 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.399899 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.399975 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.400051 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.400131 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.404072 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.450091 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.450232 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 08:00:04 crc kubenswrapper[4809]: W0312 08:00:04.465830 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-036ec8acbf0157fcf35eb6ba1e20b492b94ed9abc3cbb264346f29622a05c86c WatchSource:0}: Error finding container 036ec8acbf0157fcf35eb6ba1e20b492b94ed9abc3cbb264346f29622a05c86c: Status 404 returned error can't find the container with id 036ec8acbf0157fcf35eb6ba1e20b492b94ed9abc3cbb264346f29622a05c86c Mar 12 08:00:04 crc kubenswrapper[4809]: W0312 08:00:04.467302 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9d99f470e823a6dfbc0465cdc0993f21f7245be59f0a7f6f4dd35c6a4fa1f3d0 WatchSource:0}: Error finding container 9d99f470e823a6dfbc0465cdc0993f21f7245be59f0a7f6f4dd35c6a4fa1f3d0: Status 404 returned error can't find the container with id 9d99f470e823a6dfbc0465cdc0993f21f7245be59f0a7f6f4dd35c6a4fa1f3d0 Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.502602 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.502898 4809 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.502913 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.502930 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.502944 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.606017 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.606054 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.606065 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.606140 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.606155 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.699569 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.699621 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.699841 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.699943 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:05.699928551 +0000 UTC m=+79.281964284 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.700104 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.700213 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:05.700176348 +0000 UTC m=+79.282212081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.707413 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.707437 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.707445 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.707458 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc 
kubenswrapper[4809]: I0312 08:00:04.707467 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.800634 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.800725 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.800747 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.800842 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:04 crc 
kubenswrapper[4809]: E0312 08:00:04.800855 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.800855 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:00:05.80081959 +0000 UTC m=+79.382855333 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.800865 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.800898 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.800940 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.800952 4809 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.800939 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:05.800926503 +0000 UTC m=+79.382962246 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:04 crc kubenswrapper[4809]: E0312 08:00:04.801015 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:05.800999535 +0000 UTC m=+79.383035268 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.811066 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.811095 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.811103 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.811132 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.811144 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.913383 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.913434 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.913444 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.913455 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:04 crc kubenswrapper[4809]: I0312 08:00:04.913463 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:04Z","lastTransitionTime":"2026-03-12T08:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.016575 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.016641 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.016660 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.016683 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.016701 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.113006 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.114288 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.116445 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.117785 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.119393 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.119417 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.119427 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.119441 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.119463 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.119769 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.120767 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.121938 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.123824 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.124894 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.126794 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.127704 4809 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.129591 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.130408 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.131289 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.132761 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.133631 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.135288 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.135969 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.137341 4809 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.139348 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.140190 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.141845 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.142586 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.144660 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.145370 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.146367 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.148182 4809 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.148921 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.150618 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.151451 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.152980 4809 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.153195 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.156081 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.157683 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.158199 
4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.160135 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.160927 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.162043 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.162801 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.164192 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.164831 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.166040 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.166851 
4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.168032 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.168604 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.169689 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.170318 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.171594 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.172241 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.173403 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.173914 
4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.175104 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.175833 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.176360 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.222272 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.222318 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.222328 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.222345 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.222356 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.324843 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.324876 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.324884 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.324899 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.324910 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.427383 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.427411 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.427422 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.427434 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.427443 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.456642 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.456719 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.456739 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f3a240a33cd5f221ab0075b4157ee76f9287c973afe8b4f73c0e6ec64c7a6a33"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.457985 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.458036 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"036ec8acbf0157fcf35eb6ba1e20b492b94ed9abc3cbb264346f29622a05c86c"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.458719 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9d99f470e823a6dfbc0465cdc0993f21f7245be59f0a7f6f4dd35c6a4fa1f3d0"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.473409 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.487953 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.505080 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.520491 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.530319 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.530382 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.530402 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.530429 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.530451 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.532996 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.544584 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.554398 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.569044 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.581156 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.591643 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.604168 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.616957 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:05Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.632912 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.632953 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.632963 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.632978 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.632990 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.709676 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.709757 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.709836 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.709862 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.709904 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:07.709885115 +0000 UTC m=+81.291920848 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.709922 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:07.709914176 +0000 UTC m=+81.291949909 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.735679 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.735712 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.735722 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.735752 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.735762 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.810642 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.810735 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.810759 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.810884 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.810900 4809 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.810909 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.810933 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:00:07.810896557 +0000 UTC m=+81.392932330 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.810958 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.810985 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:07.810970679 +0000 UTC m=+81.393006452 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.810991 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.811015 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:05 crc kubenswrapper[4809]: E0312 08:00:05.811087 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:07.811063611 +0000 UTC m=+81.393099374 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.838128 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.838158 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.838167 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.838181 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.838190 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.940955 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.941010 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.941027 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.941051 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:05 crc kubenswrapper[4809]: I0312 08:00:05.941068 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:05Z","lastTransitionTime":"2026-03-12T08:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.044443 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.044497 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.044515 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.044536 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.044551 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.105402 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.105410 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.105613 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.105737 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.105425 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.105929 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.148887 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.148956 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.148975 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.149097 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.149161 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.252649 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.252700 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.252709 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.252723 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.252737 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.355060 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.355102 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.355138 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.355162 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.355173 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.457429 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.457479 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.457487 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.457504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.457513 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.560131 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.560174 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.560185 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.560202 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.560214 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.663444 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.663512 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.663532 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.663562 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.663581 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.766728 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.766799 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.766814 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.766836 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.766850 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.869053 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.869173 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.869191 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.869212 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.869226 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.893800 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.893903 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.893929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.893968 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.893992 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.908534 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:06Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.913642 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.913703 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.913717 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.913739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.913758 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.926314 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:06Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.930304 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.930353 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.930362 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.930380 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.930392 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.948727 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:06Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.953867 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.953922 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.953935 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.953954 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.953968 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.971183 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:06Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.975979 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.976019 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.976028 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.976044 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.976056 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.992476 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:06Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:06 crc kubenswrapper[4809]: E0312 08:00:06.992589 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.994526 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.994579 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.994596 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.994619 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:06 crc kubenswrapper[4809]: I0312 08:00:06.994633 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:06Z","lastTransitionTime":"2026-03-12T08:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.097335 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.097417 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.097438 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.097473 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.097501 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.126085 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.144421 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.168385 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.183445 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.199458 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.200571 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.200632 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 
08:00:07.200652 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.200684 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.200705 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.214923 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.303223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.303255 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.303264 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.303278 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.303287 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.406234 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.406297 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.406323 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.406338 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.406349 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.465252 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.481217 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\
":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.493538 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 
08:00:07.505811 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.508701 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.508740 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.508752 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.508769 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.508781 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.516353 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.527023 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.538674 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.611209 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.611275 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.611295 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.611314 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.611327 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.713981 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.714060 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.714076 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.714100 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.714149 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.728919 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.729014 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.729211 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.729275 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.729364 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:11.72933478 +0000 UTC m=+85.311370523 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.729393 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:11.729382551 +0000 UTC m=+85.311418294 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.817467 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.817534 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.817549 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.817570 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.817587 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.829975 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830000 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:00:11.829966821 +0000 UTC m=+85.412002564 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.830218 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.830269 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830551 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830563 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830631 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830581 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830661 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830760 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:11.830726392 +0000 UTC m=+85.412762165 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830664 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:07 crc kubenswrapper[4809]: E0312 08:00:07.830839 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:11.830827655 +0000 UTC m=+85.412863418 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.920367 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.920405 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.920414 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.920429 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:07 crc kubenswrapper[4809]: I0312 08:00:07.920438 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:07Z","lastTransitionTime":"2026-03-12T08:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.022914 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.022944 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.022954 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.022966 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.022976 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.105551 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:08 crc kubenswrapper[4809]: E0312 08:00:08.105666 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.105567 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:08 crc kubenswrapper[4809]: E0312 08:00:08.105729 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.105551 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:08 crc kubenswrapper[4809]: E0312 08:00:08.105952 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.125592 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.125644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.125653 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.125669 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.125685 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.229762 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.229822 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.229842 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.229868 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.229886 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.332609 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.332637 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.332646 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.332658 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.332667 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.434680 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.434718 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.434726 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.434739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.434747 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.538183 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.538241 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.538260 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.538283 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.538302 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.641647 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.641711 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.641728 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.641753 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.641771 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.745366 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.745452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.745478 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.745507 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.745526 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.848988 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.849315 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.849434 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.849548 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.849664 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.953082 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.953185 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.953211 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.953247 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:08 crc kubenswrapper[4809]: I0312 08:00:08.953271 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:08Z","lastTransitionTime":"2026-03-12T08:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.056766 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.056834 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.056867 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.056898 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.056917 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.132586 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.160236 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.160543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.160701 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.160862 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.161032 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.165765 4809 scope.go:117] "RemoveContainer" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" Mar 12 08:00:09 crc kubenswrapper[4809]: E0312 08:00:09.166000 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.166139 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.264413 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.264484 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.264497 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.264517 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.264530 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.367924 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.368211 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.368305 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.368383 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.368468 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.471638 4809 scope.go:117] "RemoveContainer" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.471866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.471935 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.471959 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.471991 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.472015 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: E0312 08:00:09.472726 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.575950 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.576016 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.576034 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.576060 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.576082 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.679022 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.679064 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.679074 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.679094 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.679107 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.782849 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.782925 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.782942 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.782963 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.782978 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.887452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.887490 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.887501 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.887521 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.887532 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.991451 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.991543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.991566 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.991605 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:09 crc kubenswrapper[4809]: I0312 08:00:09.991627 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:09Z","lastTransitionTime":"2026-03-12T08:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.095371 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.095429 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.095449 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.095475 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.095492 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.105268 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.105452 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.105504 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:10 crc kubenswrapper[4809]: E0312 08:00:10.105796 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:10 crc kubenswrapper[4809]: E0312 08:00:10.105971 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:10 crc kubenswrapper[4809]: E0312 08:00:10.106249 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.198749 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.198818 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.198840 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.198868 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.198924 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.302223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.302976 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.303059 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.303189 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.303275 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.407395 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.407475 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.407496 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.407527 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.407547 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.511431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.511861 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.512040 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.512255 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.512422 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.617438 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.617507 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.617529 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.617560 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.617582 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.721344 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.721428 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.721453 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.721486 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.721511 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.825776 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.825910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.825932 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.825957 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.826009 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.929395 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.929458 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.929482 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.929507 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:10 crc kubenswrapper[4809]: I0312 08:00:10.929525 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:10Z","lastTransitionTime":"2026-03-12T08:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.032552 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.032759 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.032818 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.032895 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.032951 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.134631 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.134870 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.134943 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.135001 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.135053 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.237534 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.237582 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.237597 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.237617 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.237631 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.340473 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.340507 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.340518 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.340533 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.340545 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.443430 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.443486 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.443497 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.443513 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.443525 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.546697 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.546755 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.546779 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.546800 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.546812 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.649653 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.649737 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.649770 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.649799 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.649820 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.753514 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.753585 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.753605 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.753641 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.753664 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.771416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.771495 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.771549 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.771641 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:19.771618671 +0000 UTC m=+93.353654414 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.771691 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.771808 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:19.771784286 +0000 UTC m=+93.353820059 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.857420 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.857496 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.857515 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.857542 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.857562 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.873024 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.873322 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:00:19.873281251 +0000 UTC m=+93.455317024 (durationBeforeRetry 8s).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.873471 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.873566 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.873797 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.873866 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.873896 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.873811 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.874004 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:19.87397012 +0000 UTC m=+93.456005893 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.874004 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.874043 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:11 crc kubenswrapper[4809]: E0312 08:00:11.874198 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:19.874170765 +0000 UTC m=+93.456206668 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.960666 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.960727 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.960740 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.960764 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:11 crc kubenswrapper[4809]: I0312 08:00:11.960778 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:11Z","lastTransitionTime":"2026-03-12T08:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.063712 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.063754 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.063763 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.063777 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.063791 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.105406 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:12 crc kubenswrapper[4809]: E0312 08:00:12.105535 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.105405 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.105405 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:12 crc kubenswrapper[4809]: E0312 08:00:12.105612 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:12 crc kubenswrapper[4809]: E0312 08:00:12.105886 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.165790 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.165829 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.165839 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.165854 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.165863 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.268307 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.268347 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.268358 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.268373 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.268384 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.370971 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.371053 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.371076 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.371101 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.371150 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.474363 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.474404 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.474413 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.474429 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.474451 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.578042 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.578104 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.578152 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.578178 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.578196 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.680372 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.680434 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.680452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.680475 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.680528 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.783257 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.783284 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.783292 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.783306 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.783315 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.886328 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.886358 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.886367 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.886380 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.886390 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.988369 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.988399 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.988407 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.988419 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:12 crc kubenswrapper[4809]: I0312 08:00:12.988428 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:12Z","lastTransitionTime":"2026-03-12T08:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.090829 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.090894 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.090917 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.090940 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.090956 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.194532 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.194594 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.194611 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.194634 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.194652 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.298482 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.298542 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.298564 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.298614 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.298632 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.402339 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.402407 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.402425 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.402452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.402474 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.505696 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.505753 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.505772 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.505797 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.505815 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.608876 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.608976 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.609002 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.609031 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.609051 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.712255 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.712290 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.712301 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.712321 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.712333 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.814585 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.814618 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.814628 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.814642 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.814653 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.916976 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.917014 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.917025 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.917040 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:13 crc kubenswrapper[4809]: I0312 08:00:13.917052 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:13Z","lastTransitionTime":"2026-03-12T08:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.019387 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.019425 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.019436 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.019461 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.019472 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.104979 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.104980 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:14 crc kubenswrapper[4809]: E0312 08:00:14.105207 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.105005 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:14 crc kubenswrapper[4809]: E0312 08:00:14.105290 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:14 crc kubenswrapper[4809]: E0312 08:00:14.105352 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.122727 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.122819 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.122841 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.122873 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.122895 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.225844 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.225890 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.225904 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.225921 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.225934 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.319760 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kbwnt"] Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.320401 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kbwnt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.325316 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.325538 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.325745 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.332548 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.332593 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.332610 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.332636 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.332655 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.345757 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.359328 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.377022 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.391773 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.405887 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.420531 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.432461 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.435417 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.435462 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.435489 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.435507 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.435518 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.443101 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.493473 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/acc48a9a-cc0a-4361-b1bc-7c7684f9bf93-hosts-file\") pod \"node-resolver-kbwnt\" (UID: \"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\") " pod="openshift-dns/node-resolver-kbwnt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.493545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44jn\" (UniqueName: 
\"kubernetes.io/projected/acc48a9a-cc0a-4361-b1bc-7c7684f9bf93-kube-api-access-j44jn\") pod \"node-resolver-kbwnt\" (UID: \"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\") " pod="openshift-dns/node-resolver-kbwnt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.538269 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.538306 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.538317 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.538332 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.538347 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.594976 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/acc48a9a-cc0a-4361-b1bc-7c7684f9bf93-hosts-file\") pod \"node-resolver-kbwnt\" (UID: \"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\") " pod="openshift-dns/node-resolver-kbwnt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.595067 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j44jn\" (UniqueName: \"kubernetes.io/projected/acc48a9a-cc0a-4361-b1bc-7c7684f9bf93-kube-api-access-j44jn\") pod \"node-resolver-kbwnt\" (UID: \"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\") " pod="openshift-dns/node-resolver-kbwnt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.595844 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/acc48a9a-cc0a-4361-b1bc-7c7684f9bf93-hosts-file\") pod \"node-resolver-kbwnt\" (UID: \"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\") " pod="openshift-dns/node-resolver-kbwnt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.622040 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j44jn\" (UniqueName: \"kubernetes.io/projected/acc48a9a-cc0a-4361-b1bc-7c7684f9bf93-kube-api-access-j44jn\") pod \"node-resolver-kbwnt\" (UID: \"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\") " pod="openshift-dns/node-resolver-kbwnt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.637018 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kbwnt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.641387 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.641475 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.641488 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.641505 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.641557 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.720675 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-h6d4c"] Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.721546 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.724055 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mwn7b"] Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.725437 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4xgl7"] Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.725584 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.726336 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.728789 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.728789 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.729612 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.732279 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.732569 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.732810 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.733148 4809 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.732608 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.733654 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.735552 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.737780 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.740626 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.743522 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.745296 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.745330 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.745342 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.745366 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.745378 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.760839 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.777955 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.792838 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.797680 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/101483ba-8ed3-40eb-9855-077e9add029f-proxy-tls\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.797759 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/101483ba-8ed3-40eb-9855-077e9add029f-mcd-auth-proxy-config\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.797844 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-system-cni-dir\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.798015 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-cnibin\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.798096 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab4c7cca-c503-41d4-8abf-5b19429defff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.798190 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-os-release\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.798220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 
08:00:14.798249 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fp69\" (UniqueName: \"kubernetes.io/projected/ab4c7cca-c503-41d4-8abf-5b19429defff-kube-api-access-6fp69\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.798320 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/101483ba-8ed3-40eb-9855-077e9add029f-rootfs\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.798467 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab4c7cca-c503-41d4-8abf-5b19429defff-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.798560 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxds\" (UniqueName: \"kubernetes.io/projected/101483ba-8ed3-40eb-9855-077e9add029f-kube-api-access-prxds\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.805967 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.824337 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.840011 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.847159 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.847194 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.847206 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.847223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.847234 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.858190 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.878909 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.895258 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899207 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab4c7cca-c503-41d4-8abf-5b19429defff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899243 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-cni-binary-copy\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899279 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-k8s-cni-cncf-io\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899300 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-os-release\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899317 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fp69\" (UniqueName: \"kubernetes.io/projected/ab4c7cca-c503-41d4-8abf-5b19429defff-kube-api-access-6fp69\") pod 
\"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899335 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-cni-bin\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899352 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-kubelet\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899369 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-conf-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899385 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-cni-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899401 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-cni-multus\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899418 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-multus-certs\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899432 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2nhs\" (UniqueName: \"kubernetes.io/projected/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-kube-api-access-x2nhs\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899460 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-os-release\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899479 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-system-cni-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899533 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-netns\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899593 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/101483ba-8ed3-40eb-9855-077e9add029f-proxy-tls\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899659 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-system-cni-dir\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899727 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-daemon-config\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899754 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-socket-dir-parent\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899789 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-os-release\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899813 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899821 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-system-cni-dir\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899840 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/101483ba-8ed3-40eb-9855-077e9add029f-rootfs\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899871 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab4c7cca-c503-41d4-8abf-5b19429defff-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899884 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/101483ba-8ed3-40eb-9855-077e9add029f-rootfs\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.899893 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxds\" (UniqueName: \"kubernetes.io/projected/101483ba-8ed3-40eb-9855-077e9add029f-kube-api-access-prxds\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900556 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900603 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-etc-kubernetes\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900637 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/101483ba-8ed3-40eb-9855-077e9add029f-mcd-auth-proxy-config\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900681 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-cnibin\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900726 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-hostroot\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900811 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab4c7cca-c503-41d4-8abf-5b19429defff-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900866 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-cnibin\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900929 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab4c7cca-c503-41d4-8abf-5b19429defff-cnibin\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.900973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/ab4c7cca-c503-41d4-8abf-5b19429defff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.901386 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/101483ba-8ed3-40eb-9855-077e9add029f-mcd-auth-proxy-config\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.905508 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/101483ba-8ed3-40eb-9855-077e9add029f-proxy-tls\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.912206 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.913989 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fp69\" (UniqueName: \"kubernetes.io/projected/ab4c7cca-c503-41d4-8abf-5b19429defff-kube-api-access-6fp69\") pod \"multus-additional-cni-plugins-mwn7b\" (UID: \"ab4c7cca-c503-41d4-8abf-5b19429defff\") " pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.925413 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.929017 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxds\" (UniqueName: \"kubernetes.io/projected/101483ba-8ed3-40eb-9855-077e9add029f-kube-api-access-prxds\") pod \"machine-config-daemon-h6d4c\" (UID: \"101483ba-8ed3-40eb-9855-077e9add029f\") " pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.948857 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.950772 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.951009 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.951233 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.951392 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:14 crc kubenswrapper[4809]: 
I0312 08:00:14.951513 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:14Z","lastTransitionTime":"2026-03-12T08:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.966342 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.979882 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:14 crc kubenswrapper[4809]: I0312 08:00:14.990940 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:14Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002586 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-system-cni-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002687 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-system-cni-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002774 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-netns\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002729 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-netns\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002852 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-daemon-config\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002913 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-os-release\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002936 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-socket-dir-parent\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " 
pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-etc-kubernetes\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.002986 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-cnibin\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-hostroot\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003027 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-cni-binary-copy\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003056 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-k8s-cni-cncf-io\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003080 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-cni-bin\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003102 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-kubelet\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003154 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-conf-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003178 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-cni-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003201 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-cni-multus\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003225 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-multus-certs\") pod 
\"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003244 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2nhs\" (UniqueName: \"kubernetes.io/projected/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-kube-api-access-x2nhs\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.003485 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-socket-dir-parent\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004269 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-daemon-config\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004307 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-cnibin\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004380 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-os-release\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004455 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-hostroot\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004459 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-conf-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004494 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-k8s-cni-cncf-io\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004530 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-etc-kubernetes\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004551 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-cni-bin\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004586 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-kubelet\") pod 
\"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004603 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-var-lib-cni-multus\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004685 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-host-run-multus-certs\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004703 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-multus-cni-dir\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.004910 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-cni-binary-copy\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.005459 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.021129 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.021384 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2nhs\" (UniqueName: \"kubernetes.io/projected/85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff-kube-api-access-x2nhs\") pod \"multus-4xgl7\" (UID: \"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\") " pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.034160 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.047285 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.049633 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.056836 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.056885 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.056883 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.056898 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.057136 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.057178 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.065249 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4xgl7" Mar 12 08:00:15 crc kubenswrapper[4809]: W0312 08:00:15.083648 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85879dda_3dbb_4ed2_a8e0_4b2dbcf175ff.slice/crio-6b5956dba7db3e0e18c24133a3d755f4fb10ba3111097d7b06a2efa80b1723cf WatchSource:0}: Error finding container 6b5956dba7db3e0e18c24133a3d755f4fb10ba3111097d7b06a2efa80b1723cf: Status 404 returned error can't find the container with id 6b5956dba7db3e0e18c24133a3d755f4fb10ba3111097d7b06a2efa80b1723cf Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.118522 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7h9l6"] Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.120049 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.122767 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.123268 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.123716 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.123891 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.123795 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.125402 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.125493 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.142361 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.158779 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.161011 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.161040 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc 
kubenswrapper[4809]: I0312 08:00:15.161048 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.161065 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.161077 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.181655 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204256 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-bin\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc 
kubenswrapper[4809]: I0312 08:00:15.204299 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-script-lib\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204324 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-log-socket\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204345 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-ovn-kubernetes\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204371 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-var-lib-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204394 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovn-node-metrics-cert\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204430 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-slash\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204454 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2wz9\" (UniqueName: \"kubernetes.io/projected/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-kube-api-access-r2wz9\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204477 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-systemd\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204499 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-config\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204580 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204621 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-ovn\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204645 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-netns\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204665 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-systemd-units\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204700 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204769 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-env-overrides\") pod 
\"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204807 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-kubelet\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204891 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-node-log\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204925 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-netd\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.204969 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-etc-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.207175 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.224421 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.238321 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.256328 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.264935 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.264988 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.265002 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.265024 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.265038 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.273524 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.288894 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.300996 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.305873 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-kubelet\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.305900 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-node-log\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.305918 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-netd\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.305940 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-etc-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.305956 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-script-lib\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.305973 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-bin\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.305988 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-log-socket\") pod 
\"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306003 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-ovn-kubernetes\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306037 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-var-lib-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306055 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovn-node-metrics-cert\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306080 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-slash\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306097 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2wz9\" (UniqueName: \"kubernetes.io/projected/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-kube-api-access-r2wz9\") pod \"ovnkube-node-7h9l6\" (UID: 
\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306086 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-node-log\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306161 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-netd\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306173 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-log-socket\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306169 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-systemd\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306159 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-kubelet\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 
08:00:15.306162 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-ovn-kubernetes\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306086 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-etc-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306125 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-systemd\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306189 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-var-lib-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306195 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-slash\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306380 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306417 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-config\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306443 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306616 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-netns\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306638 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-ovn\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306665 4809 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-systemd-units\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306682 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306712 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-env-overrides\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306915 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-bin\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306968 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-script-lib\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.306973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-ovn\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.307014 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-netns\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.307019 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-systemd-units\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.307048 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-config\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.307149 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-openvswitch\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.307328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-env-overrides\") pod \"ovnkube-node-7h9l6\" (UID: 
\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.312560 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovn-node-metrics-cert\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.317393 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.327737 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2wz9\" (UniqueName: \"kubernetes.io/projected/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-kube-api-access-r2wz9\") pod \"ovnkube-node-7h9l6\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.332040 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.367973 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.368017 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.368027 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.368046 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.368055 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.453503 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.476165 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.476329 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.476424 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.476511 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.476619 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.490498 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerStarted","Data":"82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.490540 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerStarted","Data":"d514274a9280d141851b592763153930bdf779acacebdfffb613fb0cc075a95d"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.494253 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.494281 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.494293 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"0cc85a10cbb633613c24128779f1b0f85fb631f70c9f1ccdffd1582e9b4effa8"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.495935 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kbwnt" 
event={"ID":"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93","Type":"ContainerStarted","Data":"cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.495980 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kbwnt" event={"ID":"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93","Type":"ContainerStarted","Data":"d4149b05130102384f373328250faab2412763b59037c86844c12bdf585d3500"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.497394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xgl7" event={"ID":"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff","Type":"ContainerStarted","Data":"aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.497434 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xgl7" event={"ID":"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff","Type":"ContainerStarted","Data":"6b5956dba7db3e0e18c24133a3d755f4fb10ba3111097d7b06a2efa80b1723cf"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.498809 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"664fc95bdb25511c48003bee104a460ddacbd720d5808bd8ddcfbd0a8a9d6c60"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.504894 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.517516 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.539936 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.556438 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.570270 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.579107 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.579194 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.579207 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.579222 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.579233 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.587058 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.599152 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.609229 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.623056 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.639790 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.652464 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.664860 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.679508 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.681852 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.682164 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.682197 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.682237 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.682269 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.691292 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.705952 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.717236 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.734148 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.748805 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.759177 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.772128 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.783991 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.784491 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.784535 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.784546 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.784562 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.784575 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.797162 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.817559 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.838051 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:15Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.888307 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.888584 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.888679 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.888813 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.888891 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.991553 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.991823 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.991892 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.991960 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:15 crc kubenswrapper[4809]: I0312 08:00:15.992026 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:15Z","lastTransitionTime":"2026-03-12T08:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.094634 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.094670 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.094680 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.094698 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.094710 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.105141 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:16 crc kubenswrapper[4809]: E0312 08:00:16.105248 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.105393 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.105486 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:16 crc kubenswrapper[4809]: E0312 08:00:16.105653 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:16 crc kubenswrapper[4809]: E0312 08:00:16.106251 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.119154 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.201367 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.201724 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.201733 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.201750 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.201760 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.304561 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.304635 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.304645 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.304686 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.304701 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.408817 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.408861 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.408871 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.408890 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.408901 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.515833 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab4c7cca-c503-41d4-8abf-5b19429defff" containerID="82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b" exitCode=0 Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.516093 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerDied","Data":"82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.518929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.519027 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.519069 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.519099 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.519152 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.524364 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828" exitCode=0 Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.524609 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.544464 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.557927 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.578418 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.590819 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.601210 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.617318 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.627014 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.627049 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.627060 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.627074 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.627083 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.629975 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:
00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.644493 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.696730 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.722422 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.730600 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.730667 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.730683 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc 
kubenswrapper[4809]: I0312 08:00:16.730706 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.730725 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.738495 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.759862 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.772429 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.785222 4809 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.800025 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.814501 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.824743 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.835264 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.835501 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.835589 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.835688 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.835787 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.837163 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z 
is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.855324 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03
-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.868668 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.882855 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.895669 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.914397 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.935857 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.938298 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.938345 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.938357 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.938373 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.938386 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:16Z","lastTransitionTime":"2026-03-12T08:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.949282 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:16 crc kubenswrapper[4809]: I0312 08:00:16.965924 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:16Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.040709 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.040741 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.040751 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.040766 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.040776 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.144214 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.144252 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.144261 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.144275 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.144285 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.144273 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.165992 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.183280 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.196170 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.220896 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.234452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.234485 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.234494 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.234507 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.234516 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.237198 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: E0312 08:00:17.250600 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.255134 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.255177 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.255187 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.255204 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.255213 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.255714 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: E0312 08:00:17.269765 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.270035 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.274216 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.274272 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.274290 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.274314 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.274333 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.283255 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: E0312 08:00:17.286642 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.289358 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.289390 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.289401 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.289417 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.289427 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.298705 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: E0312 08:00:17.298843 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.303258 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.303313 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.303327 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.303348 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.303359 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: E0312 08:00:17.317819 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: E0312 08:00:17.317936 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.318150 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.319930 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.319999 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.320011 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.320029 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.320040 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.333453 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.347066 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214
956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.426760 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.426836 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.426857 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.426884 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.426910 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.528992 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.529556 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.529573 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.529585 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.529594 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.529819 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab4c7cca-c503-41d4-8abf-5b19429defff" containerID="ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c" exitCode=0 Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.529867 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerDied","Data":"ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.544582 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.544631 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.544641 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.544651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.544662 4809 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.544673 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.553326 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.571919 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.584839 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.598567 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.614747 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.628013 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.632287 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.632344 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.632356 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.632372 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.632381 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.642477 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.654132 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.663960 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.676936 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.689942 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.702085 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.718618 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f380162712624
18de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.734919 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.734957 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.734966 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.734979 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.734989 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.837718 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.837966 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.837975 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.837987 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.837996 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.940405 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.940450 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.940464 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.940481 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:17 crc kubenswrapper[4809]: I0312 08:00:17.940493 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:17Z","lastTransitionTime":"2026-03-12T08:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.042838 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.042901 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.042915 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.042933 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.042944 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.105003 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.105098 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.105161 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:18 crc kubenswrapper[4809]: E0312 08:00:18.105202 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:18 crc kubenswrapper[4809]: E0312 08:00:18.105310 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:18 crc kubenswrapper[4809]: E0312 08:00:18.105409 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.145823 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.145862 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.145872 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.145887 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.145897 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.248380 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.248421 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.248431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.248446 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.248460 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.352016 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.352077 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.352096 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.352147 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.352166 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.455249 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.455296 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.455306 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.455330 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.455341 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.549651 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab4c7cca-c503-41d4-8abf-5b19429defff" containerID="04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76" exitCode=0 Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.549686 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerDied","Data":"04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.557525 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.557566 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.557578 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.557597 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.557608 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.593279 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.607447 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.621507 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.632526 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.656665 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.659527 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.659553 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.659562 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.659575 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.659584 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.674716 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.687165 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.708378 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 
08:00:18.724395 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 
08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.738080 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.750372 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.766997 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.767390 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.767416 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.767429 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.767447 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.767460 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.781283 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:18Z 
is after 2025-08-24T17:21:41Z" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.869801 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.869844 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.869856 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.869872 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.869884 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.972830 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.972876 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.972894 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.972915 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:18 crc kubenswrapper[4809]: I0312 08:00:18.972932 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:18Z","lastTransitionTime":"2026-03-12T08:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.076002 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.076040 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.076049 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.076067 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.076076 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.178760 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.178818 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.178835 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.178860 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.178878 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.316600 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.316657 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.316670 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.316687 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.316701 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.418937 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.419229 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.419241 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.419258 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.419268 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.521351 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.521389 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.521398 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.521412 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.521423 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.556509 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.558921 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab4c7cca-c503-41d4-8abf-5b19429defff" containerID="844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75" exitCode=0 Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.558955 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerDied","Data":"844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.582035 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.599439 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.618969 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.623657 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.623698 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.623708 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc 
kubenswrapper[4809]: I0312 08:00:19.623722 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.623731 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.632050 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.655085 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.679511 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.690625 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.701354 4809 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.712990 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.726595 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.726632 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.726644 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.726661 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.726673 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.729213 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.742583 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.752724 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.765358 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.828998 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.829038 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.829049 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.829069 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.829080 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.854003 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.854087 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.854191 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.854210 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.854263 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:35.854244704 +0000 UTC m=+109.436280437 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.854282 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:35.854274765 +0000 UTC m=+109.436310498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.931586 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.931625 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.931633 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.931647 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.931656 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:19Z","lastTransitionTime":"2026-03-12T08:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.955236 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.955327 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:19 crc kubenswrapper[4809]: I0312 08:00:19.955375 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955439 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 08:00:35.95542274 +0000 UTC m=+109.537458473 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955536 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955548 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955557 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955577 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955614 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955592 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:35.955586436 +0000 UTC m=+109.537622169 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955630 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:19 crc kubenswrapper[4809]: E0312 08:00:19.955705 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 08:00:35.955685378 +0000 UTC m=+109.537721121 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.033640 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.033706 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.033725 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.033749 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.033769 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.105309 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.105388 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:20 crc kubenswrapper[4809]: E0312 08:00:20.105454 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:20 crc kubenswrapper[4809]: E0312 08:00:20.105584 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.105704 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:20 crc kubenswrapper[4809]: E0312 08:00:20.105803 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.136243 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.136326 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.136347 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.136763 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.137221 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.239688 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.239746 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.239758 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.239775 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.239790 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.342329 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.342365 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.342375 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.342389 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.342401 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.444748 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.444792 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.444806 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.444824 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.444836 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.546835 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.546881 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.546893 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.546910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.546921 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.565850 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab4c7cca-c503-41d4-8abf-5b19429defff" containerID="5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97" exitCode=0 Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.565883 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerDied","Data":"5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.584801 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.602625 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.614561 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.633881 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.649765 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.649821 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.649831 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.649851 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.649864 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.659941 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.675573 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.697513 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f380162712624
18de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.719152 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.732084 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.746087 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.755454 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.755491 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.755504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.755520 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.755532 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.765330 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.782639 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.794129 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:20Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.857890 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.857931 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.857940 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.857958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.857968 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.960027 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.960063 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.960073 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.960092 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:20 crc kubenswrapper[4809]: I0312 08:00:20.960104 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:20Z","lastTransitionTime":"2026-03-12T08:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.015614 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vcxlw"] Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.015938 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.017762 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.018071 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.019593 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.019839 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.029067 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.047446 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.060365 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.061958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.062000 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.062012 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.062029 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.062041 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.077702 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.088721 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.098852 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.106419 4809 scope.go:117] "RemoveContainer" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" Mar 12 08:00:21 crc kubenswrapper[4809]: E0312 08:00:21.106599 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.107056 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.119644 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.130815 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.152450 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},
{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"star
tedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.163345 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.164448 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.164490 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.164502 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.164519 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.164529 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.169229 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-host\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.169277 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-serviceca\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.169296 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpqkt\" (UniqueName: \"kubernetes.io/projected/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-kube-api-access-xpqkt\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.184827 
4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.193017 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.205453 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.267479 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.267529 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.267540 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.267556 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.267569 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.270230 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-host\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.270297 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-serviceca\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.270324 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpqkt\" (UniqueName: \"kubernetes.io/projected/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-kube-api-access-xpqkt\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.270363 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-host\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.271553 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-serviceca\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 
08:00:21.297910 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpqkt\" (UniqueName: \"kubernetes.io/projected/5f7db3e1-e4cd-4800-a7c2-30d8254caa4f-kube-api-access-xpqkt\") pod \"node-ca-vcxlw\" (UID: \"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\") " pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.351218 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vcxlw" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.368803 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.368840 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.368851 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.368866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.368876 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.499289 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.499322 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.499332 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.499347 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.499355 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.571614 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vcxlw" event={"ID":"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f","Type":"ContainerStarted","Data":"d2425252a425c8cb678c6d04374258b2dd2813ec924b42c4482df388296674e9"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.576800 4809 generic.go:334] "Generic (PLEG): container finished" podID="ab4c7cca-c503-41d4-8abf-5b19429defff" containerID="1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e" exitCode=0 Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.576850 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerDied","Data":"1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.592208 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.603623 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.605454 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.605480 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.605518 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.605532 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.605541 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.612957 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.624337 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214
956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.641418 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.653505 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.681403 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f380162712624
18de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.708648 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.714687 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.714720 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.714730 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.714744 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.714752 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.724173 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.737725 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.748796 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.761277 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.779446 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.793571 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:21Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.816299 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.816328 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.816336 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.816349 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.816357 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.918225 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.918269 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.918286 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.918303 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:21 crc kubenswrapper[4809]: I0312 08:00:21.918314 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:21Z","lastTransitionTime":"2026-03-12T08:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.021429 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.021482 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.021503 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.021529 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.021549 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.105014 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:22 crc kubenswrapper[4809]: E0312 08:00:22.105177 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.105582 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:22 crc kubenswrapper[4809]: E0312 08:00:22.105659 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.105711 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:22 crc kubenswrapper[4809]: E0312 08:00:22.105764 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.124133 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.124215 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.124229 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.124245 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.124256 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.226927 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.226963 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.226973 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.226987 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.226997 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.329007 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.329040 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.329049 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.329065 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.329074 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.431831 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.431869 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.431880 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.431896 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.431914 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.534679 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.534713 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.534723 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.534739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.534750 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.583049 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" event={"ID":"ab4c7cca-c503-41d4-8abf-5b19429defff","Type":"ContainerStarted","Data":"faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.587773 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.588030 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.588062 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.589096 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vcxlw" event={"ID":"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f","Type":"ContainerStarted","Data":"a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.605485 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.637852 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.637902 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.637917 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.637939 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.637956 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.654776 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.656773 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.669062 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.686444 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.706180 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.717221 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.733358 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.740750 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.740793 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.740808 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.740830 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.740849 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.746178 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.762502 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.775901 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.794035 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.810309 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.822669 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.839169 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.842621 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.842652 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.842663 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.842677 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.842687 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.852505 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.863404 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.877489 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.891699 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.909297 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.924823 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.935745 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.944375 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.944410 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.944422 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.944436 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.944447 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:22Z","lastTransitionTime":"2026-03-12T08:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.949603 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z 
is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.979061 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03
-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:22 crc kubenswrapper[4809]: I0312 08:00:22.993774 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:22Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.013053 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.024228 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.047066 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.047095 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.047103 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.047130 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.047142 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.049524 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.060610 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.149850 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.149926 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.149958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.150007 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 
08:00:23.150020 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.252445 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.252487 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.252499 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.252574 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.252597 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.354750 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.354786 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.354794 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.354805 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.354815 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.457574 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.457608 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.457618 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.457632 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.457643 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.559990 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.560041 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.560060 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.560083 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.560100 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.592212 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.620653 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.634921 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-res
olver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.653110 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.663529 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc 
kubenswrapper[4809]: I0312 08:00:23.663558 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.663568 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.663581 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.663590 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.672195 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4
e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.685935 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.702978 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.734438 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.752802 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.765647 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.765722 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.765739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.765763 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.765778 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.780078 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.797206 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.818342 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.837572 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.853355 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.868277 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.868319 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.868334 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.868352 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.868366 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.870445 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.887094 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:23Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.970896 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.970922 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.970930 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.970942 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:23 crc kubenswrapper[4809]: I0312 08:00:23.970950 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:23Z","lastTransitionTime":"2026-03-12T08:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.074204 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.074233 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.074242 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.074255 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.074264 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.105136 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:24 crc kubenswrapper[4809]: E0312 08:00:24.105246 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.105541 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:24 crc kubenswrapper[4809]: E0312 08:00:24.105611 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.105663 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:24 crc kubenswrapper[4809]: E0312 08:00:24.105725 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.176861 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.176895 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.176910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.176926 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.176939 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.279084 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.279140 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.279153 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.279167 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.279178 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.380897 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.380922 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.380930 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.380941 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.380949 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.483509 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.483535 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.483543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.483556 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.483564 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.585562 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.585644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.585662 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.585680 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.585690 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.596362 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/0.log" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.598793 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648" exitCode=1 Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.598830 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.599390 4809 scope.go:117] "RemoveContainer" containerID="43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.622361 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.640182 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.656387 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.671184 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.683094 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.687762 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.687795 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.687804 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.687817 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.687826 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.697738 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.716884 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.733641 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:24Z\\\",\\\"message\\\":\\\"ctor *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.412852 6608 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413043 6608 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413306 6608 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413415 6608 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413721 6608 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0312 08:00:24.413759 6608 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0312 08:00:24.413788 6608 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0312 08:00:24.413860 6608 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0312 08:00:24.413870 6608 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0312 08:00:24.413895 6608 factory.go:656] Stopping watch factory\\\\nI0312 08:00:24.413917 6608 ovnkube.go:599] Stopped ovnkube\\\\nI0312 
08:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6
828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.742859 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.752826 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.763200 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.772188 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.786552 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.790363 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.790397 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.790405 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.790421 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.790429 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.796438 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:24Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.892774 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.892810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.892819 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.892833 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.892842 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.995873 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.995949 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.995958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.995973 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:24 crc kubenswrapper[4809]: I0312 08:00:24.996003 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:24Z","lastTransitionTime":"2026-03-12T08:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.097723 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.097756 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.097764 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.097779 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.097787 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.199810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.199849 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.199859 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.199874 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.199882 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.301882 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.301911 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.301920 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.301933 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.301942 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.404204 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.404256 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.404267 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.404279 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.404289 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.499547 4809 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.506390 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.506438 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.506455 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.506475 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.506492 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.603973 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/0.log" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.606662 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.607432 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.607905 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.607937 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.607947 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.607960 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.607971 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.635556 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.647192 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.660192 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.671271 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.687270 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:24Z\\\",\\\"message\\\":\\\"ctor *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.412852 6608 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413043 6608 reflector.go:311] Stopping reflector 
*v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413306 6608 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413415 6608 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413721 6608 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0312 08:00:24.413759 6608 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0312 08:00:24.413788 6608 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0312 08:00:24.413860 6608 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0312 08:00:24.413870 6608 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0312 08:00:24.413895 6608 factory.go:656] Stopping watch factory\\\\nI0312 08:00:24.413917 6608 ovnkube.go:599] Stopped ovnkube\\\\nI0312 
08:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.697573 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.707094 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T0
8:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.709703 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.709739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.709757 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.709776 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.709788 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.716399 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.728347 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.739063 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.754091 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.770875 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.783622 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.794968 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:25Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.812021 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.812145 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.812155 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.812168 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.812177 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.915045 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.915080 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.915089 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.915103 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:25 crc kubenswrapper[4809]: I0312 08:00:25.915128 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:25Z","lastTransitionTime":"2026-03-12T08:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.018085 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.018134 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.018145 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.018159 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.018169 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.105255 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.105304 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.105410 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:26 crc kubenswrapper[4809]: E0312 08:00:26.105405 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:26 crc kubenswrapper[4809]: E0312 08:00:26.105505 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:26 crc kubenswrapper[4809]: E0312 08:00:26.105573 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.120555 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.120598 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.120610 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.120626 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.120637 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.222941 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.222987 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.223001 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.223020 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.223033 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.325398 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.325446 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.325458 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.325475 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.325487 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.428663 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.428722 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.428738 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.428761 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.428780 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.532571 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.532633 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.532650 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.532681 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.532701 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.613984 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/1.log" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.615172 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/0.log" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.619567 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c" exitCode=1 Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.619626 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.619690 4809 scope.go:117] "RemoveContainer" containerID="43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.621199 4809 scope.go:117] "RemoveContainer" containerID="66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c" Mar 12 08:00:26 crc kubenswrapper[4809]: E0312 08:00:26.621552 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.635476 4809 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.635511 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.635522 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.635539 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.635552 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.650073 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.668561 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.686467 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.702718 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.722612 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:24Z\\\",\\\"message\\\":\\\"ctor *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.412852 6608 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413043 6608 reflector.go:311] Stopping reflector 
*v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413306 6608 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413415 6608 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413721 6608 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0312 08:00:24.413759 6608 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0312 08:00:24.413788 6608 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0312 08:00:24.413860 6608 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0312 08:00:24.413870 6608 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0312 08:00:24.413895 6608 factory.go:656] Stopping watch factory\\\\nI0312 08:00:24.413917 6608 ovnkube.go:599] Stopped ovnkube\\\\nI0312 08:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:25Z\\\",\\\"message\\\":\\\"twork=default are: map[]\\\\nI0312 08:00:25.399620 6771 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0312 08:00:25.399629 6771 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.36\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0312 08:00:25.399649 6771 services_controller.go:444] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0312 08:00:25.399661 6771 services_controller.go:445] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0312 08:00:25.399668 6771 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\
\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.732878 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.739098 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.739156 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.739170 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.739185 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.739218 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.742490 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.754923 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.768003 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.780261 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.792637 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.804164 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.812232 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.823373 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.842137 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.842179 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.842187 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.842223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.842233 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.921621 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn"] Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.922238 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.924367 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.925077 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.935540 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.945467 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.945528 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.945541 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.945557 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.945568 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:26Z","lastTransitionTime":"2026-03-12T08:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.947472 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.959356 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b97e5c85-4adb-4061-b2e0-1be7440c2133-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.959566 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b97e5c85-4adb-4061-b2e0-1be7440c2133-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.959656 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f99hq\" (UniqueName: \"kubernetes.io/projected/b97e5c85-4adb-4061-b2e0-1be7440c2133-kube-api-access-f99hq\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.959730 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b97e5c85-4adb-4061-b2e0-1be7440c2133-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.961396 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.975600 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:26 crc kubenswrapper[4809]: I0312 08:00:26.990249 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:26Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.004649 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.014841 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.027057 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.048304 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.048340 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.048351 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.048366 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.048378 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.060955 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b97e5c85-4adb-4061-b2e0-1be7440c2133-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.061028 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b97e5c85-4adb-4061-b2e0-1be7440c2133-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.061086 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f99hq\" (UniqueName: \"kubernetes.io/projected/b97e5c85-4adb-4061-b2e0-1be7440c2133-kube-api-access-f99hq\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.061146 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b97e5c85-4adb-4061-b2e0-1be7440c2133-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.062838 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/b97e5c85-4adb-4061-b2e0-1be7440c2133-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.063415 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b97e5c85-4adb-4061-b2e0-1be7440c2133-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.065913 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.073966 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b97e5c85-4adb-4061-b2e0-1be7440c2133-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.094963 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f99hq\" (UniqueName: \"kubernetes.io/projected/b97e5c85-4adb-4061-b2e0-1be7440c2133-kube-api-access-f99hq\") pod \"ovnkube-control-plane-749d76644c-d7prn\" (UID: \"b97e5c85-4adb-4061-b2e0-1be7440c2133\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.097717 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"na
me\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.117039 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.138515 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.150338 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.150387 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.150399 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.150418 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.150429 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.152961 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.170676 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:24Z\\\",\\\"message\\\":\\\"ctor *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.412852 6608 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413043 6608 reflector.go:311] Stopping reflector 
*v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413306 6608 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413415 6608 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413721 6608 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0312 08:00:24.413759 6608 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0312 08:00:24.413788 6608 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0312 08:00:24.413860 6608 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0312 08:00:24.413870 6608 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0312 08:00:24.413895 6608 factory.go:656] Stopping watch factory\\\\nI0312 08:00:24.413917 6608 ovnkube.go:599] Stopped ovnkube\\\\nI0312 08:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:25Z\\\",\\\"message\\\":\\\"twork=default are: map[]\\\\nI0312 08:00:25.399620 6771 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0312 08:00:25.399629 6771 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.36\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0312 08:00:25.399649 6771 services_controller.go:444] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0312 08:00:25.399661 6771 services_controller.go:445] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0312 08:00:25.399668 6771 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\
\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.182490 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.193914 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.211590 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692
f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.223075 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.231817 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.234646 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" Mar 12 08:00:27 crc kubenswrapper[4809]: W0312 08:00:27.248828 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97e5c85_4adb_4061_b2e0_1be7440c2133.slice/crio-cb0b89776cbfde1108df4c381bee064a4d04eb01cbaa3bcf7ace3a28027d29c1 WatchSource:0}: Error finding container cb0b89776cbfde1108df4c381bee064a4d04eb01cbaa3bcf7ace3a28027d29c1: Status 404 returned error can't find the container with id cb0b89776cbfde1108df4c381bee064a4d04eb01cbaa3bcf7ace3a28027d29c1 Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.252465 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.252488 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.252496 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.252509 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc 
kubenswrapper[4809]: I0312 08:00:27.252519 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.256523 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.271888 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.294303 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43ea7eb62d65475117fbd44e54f1fea06b6c47a63cdf9e7be6bfaa498b208648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:24Z\\\",\\\"message\\\":\\\"ctor *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.412852 6608 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413043 6608 reflector.go:311] Stopping reflector 
*v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 08:00:24.413306 6608 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413415 6608 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0312 08:00:24.413721 6608 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0312 08:00:24.413759 6608 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0312 08:00:24.413788 6608 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0312 08:00:24.413860 6608 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0312 08:00:24.413870 6608 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0312 08:00:24.413895 6608 factory.go:656] Stopping watch factory\\\\nI0312 08:00:24.413917 6608 ovnkube.go:599] Stopped ovnkube\\\\nI0312 08:00:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:25Z\\\",\\\"message\\\":\\\"twork=default are: map[]\\\\nI0312 08:00:25.399620 6771 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0312 08:00:25.399629 6771 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.36\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0312 08:00:25.399649 6771 services_controller.go:444] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0312 08:00:25.399661 6771 services_controller.go:445] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0312 08:00:25.399668 6771 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\
\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.310429 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.326005 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.338291 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.351915 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.355399 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.356240 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.356282 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.356308 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.356324 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.365220 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.379427 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214
956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.397468 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.411128 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.460418 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.460484 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.460497 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.460644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.460671 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.562882 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.562929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.562941 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.562986 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.563000 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.630485 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/1.log" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.633501 4809 scope.go:117] "RemoveContainer" containerID="66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c" Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.633647 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.634279 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" event={"ID":"b97e5c85-4adb-4061-b2e0-1be7440c2133","Type":"ContainerStarted","Data":"edfd33ff837e4b09002fef30a5d35f242dd087139d7cabc6416bae56811968c9"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.634319 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" event={"ID":"b97e5c85-4adb-4061-b2e0-1be7440c2133","Type":"ContainerStarted","Data":"8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.634333 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" event={"ID":"b97e5c85-4adb-4061-b2e0-1be7440c2133","Type":"ContainerStarted","Data":"cb0b89776cbfde1108df4c381bee064a4d04eb01cbaa3bcf7ace3a28027d29c1"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.645080 4809 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.657145 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.658471 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-p566k"] Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.659024 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.659102 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.665393 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.665424 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.665431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.665444 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.665453 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.665785 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j742\" (UniqueName: \"kubernetes.io/projected/3d31c58d-0f0d-431f-bebc-57173f467eee-kube-api-access-7j742\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.665845 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.673930 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.673970 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.673982 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.673997 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.674009 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.675787 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.688868 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.692026 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.692061 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.692071 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.692088 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.692098 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.692160 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:25Z\\\",\\\"message\\\":\\\"twork=default are: map[]\\\\nI0312 08:00:25.399620 6771 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0312 08:00:25.399629 6771 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.36\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0312 08:00:25.399649 6771 services_controller.go:444] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0312 08:00:25.399661 6771 services_controller.go:445] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0312 08:00:25.399668 6771 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.703524 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.705908 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742f
d0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.707057 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.707090 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.707099 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.707129 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.707142 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.717791 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.718772 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.725988 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.726020 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.726028 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.726042 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.726051 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.731270 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.736466 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.739517 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.739551 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.739561 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.739577 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.739590 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.741897 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.751133 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.751282 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.754037 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f
416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.766616 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j742\" (UniqueName: \"kubernetes.io/projected/3d31c58d-0f0d-431f-bebc-57173f467eee-kube-api-access-7j742\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.766682 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:27 crc kubenswrapper[4809]: 
E0312 08:00:27.766816 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:27 crc kubenswrapper[4809]: E0312 08:00:27.766876 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs podName:3d31c58d-0f0d-431f-bebc-57173f467eee nodeName:}" failed. No retries permitted until 2026-03-12 08:00:28.266857908 +0000 UTC m=+101.848893651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs") pod "network-metrics-daemon-p566k" (UID: "3d31c58d-0f0d-431f-bebc-57173f467eee") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.767295 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.767331 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.767346 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.767360 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.767370 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.767413 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z 
is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.782188 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.788303 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j742\" (UniqueName: \"kubernetes.io/projected/3d31c58d-0f0d-431f-bebc-57173f467eee-kube-api-access-7j742\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.797180 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.810102 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.827764 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692
f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.840363 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.861431 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.869150 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.869188 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.869200 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.869216 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.869225 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.875002 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.885735 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.896781 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.910487 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.931909 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:25Z\\\",\\\"message\\\":\\\"twork=default are: map[]\\\\nI0312 08:00:25.399620 6771 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0312 08:00:25.399629 6771 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.36\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0312 08:00:25.399649 6771 services_controller.go:444] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0312 08:00:25.399661 6771 services_controller.go:445] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0312 08:00:25.399668 6771 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.942252 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.952295 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T0
8:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.961591 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.971248 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.971285 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.971296 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:27 crc 
kubenswrapper[4809]: I0312 08:00:27.971311 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.971322 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:27Z","lastTransitionTime":"2026-03-12T08:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.973417 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b
aa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:27 crc kubenswrapper[4809]: I0312 08:00:27.987459 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4
e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.000096 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.011941 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:28Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.021216 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:28Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.034374 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:28Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.044929 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:28Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:28 crc 
kubenswrapper[4809]: I0312 08:00:28.073654 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.073676 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.073685 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.073697 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.073707 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.105348 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.105408 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 08:00:28 crc kubenswrapper[4809]: E0312 08:00:28.105447 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.105518 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 08:00:28 crc kubenswrapper[4809]: E0312 08:00:28.105645 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 12 08:00:28 crc kubenswrapper[4809]: E0312 08:00:28.105747 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.175768 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.175810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.175822 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.175838 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.175848 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.270555 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k"
Mar 12 08:00:28 crc kubenswrapper[4809]: E0312 08:00:28.270731 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 08:00:28 crc kubenswrapper[4809]: E0312 08:00:28.270802 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs podName:3d31c58d-0f0d-431f-bebc-57173f467eee nodeName:}" failed. No retries permitted until 2026-03-12 08:00:29.270782516 +0000 UTC m=+102.852818249 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs") pod "network-metrics-daemon-p566k" (UID: "3d31c58d-0f0d-431f-bebc-57173f467eee") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.277811 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.277846 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.277857 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.277874 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.277886 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.380985 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.381053 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.381067 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.381081 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.381091 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.483464 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.483504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.483517 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.483532 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.483544 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.585961 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.585993 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.586001 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.586012 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.586021 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.687958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.688008 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.688025 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.688044 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.688056 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.791577 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.791621 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.791632 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.791650 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.791662 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.894488 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.894516 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.894526 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.894538 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.894546 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.996267 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.996299 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.996307 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.996319 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:28 crc kubenswrapper[4809]: I0312 08:00:28.996328 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:28Z","lastTransitionTime":"2026-03-12T08:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.099782 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.099846 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.099872 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.099904 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.099927 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.105142 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k"
Mar 12 08:00:29 crc kubenswrapper[4809]: E0312 08:00:29.105302 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.203873 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.203935 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.203959 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.203991 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.204014 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.278287 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k"
Mar 12 08:00:29 crc kubenswrapper[4809]: E0312 08:00:29.278546 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 08:00:29 crc kubenswrapper[4809]: E0312 08:00:29.278642 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs podName:3d31c58d-0f0d-431f-bebc-57173f467eee nodeName:}" failed. No retries permitted until 2026-03-12 08:00:31.278617642 +0000 UTC m=+104.860653405 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs") pod "network-metrics-daemon-p566k" (UID: "3d31c58d-0f0d-431f-bebc-57173f467eee") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.307774 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.307836 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.307852 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.307876 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.307893 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.410701 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.410741 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.410752 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.410767 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.410778 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.512884 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.512949 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.512966 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.512994 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.513011 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.616053 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.616158 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.616193 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.616225 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.616243 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.719569 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.719644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.719667 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.719697 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.719724 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.823056 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.823165 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.823190 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.823218 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.823239 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.925814 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.925907 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.925927 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.925955 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:29 crc kubenswrapper[4809]: I0312 08:00:29.925974 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:29Z","lastTransitionTime":"2026-03-12T08:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.028508 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.028572 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.028597 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.028627 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.028657 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.105768 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.105815 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 08:00:30 crc kubenswrapper[4809]: E0312 08:00:30.105952 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.106009 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 08:00:30 crc kubenswrapper[4809]: E0312 08:00:30.106206 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 12 08:00:30 crc kubenswrapper[4809]: E0312 08:00:30.106367 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.131772 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.131821 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.131839 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.131860 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.131877 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.235027 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.235086 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.235102 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.235152 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.235172 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.338597 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.338662 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.338687 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.338716 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.338740 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.446689 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.446751 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.446761 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.446773 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.446782 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.550452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.550517 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.550538 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.550565 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.550581 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.652789 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.652849 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.652866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.652891 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.652910 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.755460 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.755529 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.755553 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.755583 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.755605 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.858743 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.858810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.858833 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.858859 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.858878 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.961793 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.961857 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.961880 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.961910 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:30 crc kubenswrapper[4809]: I0312 08:00:30.961932 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:30Z","lastTransitionTime":"2026-03-12T08:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.066067 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.066192 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.066215 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.066251 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.066277 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.105985 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:31 crc kubenswrapper[4809]: E0312 08:00:31.106300 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.170321 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.170391 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.170474 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.170533 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.170585 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.273732 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.273767 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.273780 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.273795 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.273805 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.302661 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:31 crc kubenswrapper[4809]: E0312 08:00:31.302849 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:31 crc kubenswrapper[4809]: E0312 08:00:31.302929 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs podName:3d31c58d-0f0d-431f-bebc-57173f467eee nodeName:}" failed. No retries permitted until 2026-03-12 08:00:35.30290555 +0000 UTC m=+108.884941323 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs") pod "network-metrics-daemon-p566k" (UID: "3d31c58d-0f0d-431f-bebc-57173f467eee") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.377405 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.377466 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.377483 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.377509 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.377527 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.480296 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.480342 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.480354 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.480374 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.480386 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.582969 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.583048 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.583062 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.583083 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.583099 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.685795 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.685840 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.685850 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.685865 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.685874 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.789453 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.789510 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.789531 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.789554 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.789573 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.892544 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.892581 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.892591 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.892610 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.892651 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.994984 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.995044 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.995061 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.995086 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:31 crc kubenswrapper[4809]: I0312 08:00:31.995108 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:31Z","lastTransitionTime":"2026-03-12T08:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.097836 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.097878 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.097890 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.097905 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.097916 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:32Z","lastTransitionTime":"2026-03-12T08:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.105283 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.105304 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.105323 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:32 crc kubenswrapper[4809]: E0312 08:00:32.105412 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:32 crc kubenswrapper[4809]: E0312 08:00:32.105530 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:32 crc kubenswrapper[4809]: E0312 08:00:32.105666 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.200279 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.200318 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.200330 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.200347 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.200357 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:32Z","lastTransitionTime":"2026-03-12T08:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.302782 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.302882 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.302908 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.302943 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.302967 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:32Z","lastTransitionTime":"2026-03-12T08:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.406019 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.406101 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.406159 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.406186 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.406204 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:32Z","lastTransitionTime":"2026-03-12T08:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.508874 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.508932 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.508951 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.508975 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.508995 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:32Z","lastTransitionTime":"2026-03-12T08:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.797474 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.797528 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.797543 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.797569 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.797587 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:32Z","lastTransitionTime":"2026-03-12T08:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.899763 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.899799 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.899809 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.899825 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:32 crc kubenswrapper[4809]: I0312 08:00:32.899839 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:32Z","lastTransitionTime":"2026-03-12T08:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.002195 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.002261 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.002283 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.002313 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.002334 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.105514 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:33 crc kubenswrapper[4809]: E0312 08:00:33.105768 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.105994 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.106032 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.106055 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.106083 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.106106 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.209320 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.209387 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.209405 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.209431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.209453 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.312884 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.312947 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.312972 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.313000 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.313021 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.415746 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.415854 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.415872 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.415983 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.416004 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.518692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.518801 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.518820 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.518844 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.518862 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.621453 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.621523 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.621541 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.621751 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.621771 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.725082 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.725184 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.725204 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.725235 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.725257 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.828144 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.828183 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.828193 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.828206 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.828218 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.931763 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.931840 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.931865 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.931893 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:33 crc kubenswrapper[4809]: I0312 08:00:33.931910 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:33Z","lastTransitionTime":"2026-03-12T08:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.034818 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.034851 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.034859 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.034872 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.034882 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.106013 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.106065 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.106099 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:34 crc kubenswrapper[4809]: E0312 08:00:34.106409 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:34 crc kubenswrapper[4809]: E0312 08:00:34.106564 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:34 crc kubenswrapper[4809]: E0312 08:00:34.106709 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.107808 4809 scope.go:117] "RemoveContainer" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" Mar 12 08:00:34 crc kubenswrapper[4809]: E0312 08:00:34.108224 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.138172 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.138229 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.138247 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.138270 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.138287 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.240692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.240753 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.240770 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.240794 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.240811 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.343889 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.343954 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.343979 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.344005 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.344023 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.447107 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.447203 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.447222 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.447248 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.447271 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.550252 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.550319 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.550336 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.550360 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.550378 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.654921 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.654982 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.654999 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.655022 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.655038 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.758201 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.758281 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.758313 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.758343 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.758365 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.860647 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.861268 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.861295 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.861319 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.861336 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.964447 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.964490 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.964562 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.964590 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:34 crc kubenswrapper[4809]: I0312 08:00:34.964606 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:34Z","lastTransitionTime":"2026-03-12T08:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.068628 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.068692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.068710 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.068735 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.068752 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.105523 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:35 crc kubenswrapper[4809]: E0312 08:00:35.105742 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.171808 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.171886 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.171905 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.171929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.171947 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.274903 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.274957 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.274974 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.274996 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.275013 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.318066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:35 crc kubenswrapper[4809]: E0312 08:00:35.318233 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:35 crc kubenswrapper[4809]: E0312 08:00:35.318308 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs podName:3d31c58d-0f0d-431f-bebc-57173f467eee nodeName:}" failed. No retries permitted until 2026-03-12 08:00:43.318289818 +0000 UTC m=+116.900325551 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs") pod "network-metrics-daemon-p566k" (UID: "3d31c58d-0f0d-431f-bebc-57173f467eee") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.377339 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.377386 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.377398 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.377418 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.377430 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.480630 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.481318 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.481335 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.481392 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.481412 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.583958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.584035 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.584060 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.584089 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.584110 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.687413 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.687477 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.687498 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.687525 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.687544 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.790924 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.790995 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.791013 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.791038 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.791056 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.893718 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.893774 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.893796 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.893823 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.893842 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.924610 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.924860 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 08:00:35 crc kubenswrapper[4809]: E0312 08:00:35.924958 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 12 08:00:35 crc kubenswrapper[4809]: E0312 08:00:35.925059 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:01:07.925028457 +0000 UTC m=+141.507064240 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 12 08:00:35 crc kubenswrapper[4809]: E0312 08:00:35.925070 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 12 08:00:35 crc kubenswrapper[4809]: E0312 08:00:35.925180 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:01:07.925162021 +0000 UTC m=+141.507197794 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.997050 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.997107 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.997155 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.997180 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:35 crc kubenswrapper[4809]: I0312 08:00:35.997197 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:35Z","lastTransitionTime":"2026-03-12T08:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.025678 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.025858 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.025885 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:01:08.025851494 +0000 UTC m=+141.607887257 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.025992 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.026146 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.026194 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.026235 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.026263 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.026267 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.026297 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.026365 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 08:01:08.026338037 +0000 UTC m=+141.608373810 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.026397 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 08:01:08.026383398 +0000 UTC m=+141.608419171 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.100336 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.100377 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.100389 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.100406 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.100419 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.104939 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.104985 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.104970 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.105108 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.105308 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 12 08:00:36 crc kubenswrapper[4809]: E0312 08:00:36.105396 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.204180 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.204253 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.204270 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.204294 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.204310 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.307650 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.307722 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.307739 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.307765 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.307783 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.411458 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.411527 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.411544 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.411571 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.411589 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.514505 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.514573 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.514590 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.514615 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.514633 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.618147 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.618223 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.618244 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.618269 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.618292 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.721243 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.721312 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.721330 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.721358 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.721376 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.824861 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.824909 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.824925 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.824940 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.824952 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.927767 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.927842 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.927861 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.927888 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:36 crc kubenswrapper[4809]: I0312 08:00:36.927906 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:36Z","lastTransitionTime":"2026-03-12T08:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.030504 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.030589 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.030608 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.030642 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.030661 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.104998 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k"
Mar 12 08:00:37 crc kubenswrapper[4809]: E0312 08:00:37.105226 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.122560 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.133530 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.133605 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.133624 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.133649 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.133666 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.139277 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z
is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.155013 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc 
kubenswrapper[4809]: I0312 08:00:37.173238 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.197148 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.214827 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.235600 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.236956 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.236990 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.236999 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.237014 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.237025 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.252170 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.262320 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.273738 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.286616 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.315319 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:25Z\\\",\\\"message\\\":\\\"twork=default are: map[]\\\\nI0312 08:00:25.399620 6771 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0312 08:00:25.399629 6771 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.36\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0312 08:00:25.399649 6771 services_controller.go:444] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0312 08:00:25.399661 6771 services_controller.go:445] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0312 08:00:25.399668 6771 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.325995 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.337244 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T0
8:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.339626 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.339663 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.339670 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.339685 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.339695 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.348768 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.360894 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.442638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.442694 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.442707 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.442725 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.442740 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.545793 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.546215 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.546323 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.546428 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.546523 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.650527 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.651196 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.651398 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.651557 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.651703 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.755564 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.755926 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.756102 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.756352 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.756754 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.758561 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.758638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.758661 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.758690 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.758708 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: E0312 08:00:37.774301 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.780190 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.780277 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.780297 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.780333 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.780354 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: E0312 08:00:37.801297 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.806822 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.807075 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.807376 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.807705 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.807966 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: E0312 08:00:37.823613 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.829541 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.829604 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.829624 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.829652 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.829673 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: E0312 08:00:37.846518 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.852770 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.852825 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.852843 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.852868 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.852888 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: E0312 08:00:37.869601 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:37Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:37 crc kubenswrapper[4809]: E0312 08:00:37.869930 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.879419 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.879478 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.879535 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.879562 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.879580 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.983325 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.983417 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.983437 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.983460 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:37 crc kubenswrapper[4809]: I0312 08:00:37.983496 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:37Z","lastTransitionTime":"2026-03-12T08:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.086326 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.086358 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.086366 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.086377 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.086386 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.105240 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:38 crc kubenswrapper[4809]: E0312 08:00:38.105341 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.105668 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:38 crc kubenswrapper[4809]: E0312 08:00:38.105749 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.105787 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:38 crc kubenswrapper[4809]: E0312 08:00:38.105830 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.106400 4809 scope.go:117] "RemoveContainer" containerID="66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.193456 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.193497 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.193507 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.193523 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.193536 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.296067 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.296162 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.296183 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.296209 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.296227 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.398866 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.398895 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.398903 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.398917 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.398925 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.501407 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.501450 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.501464 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.501480 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.501521 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.604404 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.604447 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.604459 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.604475 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.604490 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.707468 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.707511 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.707525 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.707544 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.707557 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.809338 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.809390 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.809398 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.809410 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.809419 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.818330 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/1.log" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.820939 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.821286 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.845252 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.861829 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.875426 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.885545 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.899779 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.912007 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.912059 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.912071 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.912088 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.912101 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:38Z","lastTransitionTime":"2026-03-12T08:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.913104 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc 
kubenswrapper[4809]: I0312 08:00:38.932253 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.946603 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.963459 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.979543 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:38 crc kubenswrapper[4809]: I0312 08:00:38.993615 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:38Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.014592 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.014676 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.014695 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.014719 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.014733 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.022600 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:25Z\\\",\\\"message\\\":\\\"twork=default are: map[]\\\\nI0312 08:00:25.399620 6771 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0312 08:00:25.399629 6771 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.36\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0312 08:00:25.399649 6771 services_controller.go:444] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0312 08:00:25.399661 6771 services_controller.go:445] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0312 08:00:25.399668 6771 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.039266 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.058833 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T0
8:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.077733 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.103158 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.105374 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:39 crc kubenswrapper[4809]: E0312 08:00:39.105539 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.117175 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.117258 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.117275 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.117295 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.117309 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.220164 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.220216 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.220232 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.220279 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.220300 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.323697 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.323754 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.323773 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.323798 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.323815 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.427092 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.427179 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.427199 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.427226 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.427242 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.530488 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.530550 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.530581 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.530608 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.530627 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.634257 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.634334 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.634351 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.634376 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.634396 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.737570 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.737622 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.737640 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.737663 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.737680 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.828473 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/2.log" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.829631 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/1.log" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.833774 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89" exitCode=1 Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.833842 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.833900 4809 scope.go:117] "RemoveContainer" containerID="66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.835318 4809 scope.go:117] "RemoveContainer" containerID="bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89" Mar 12 08:00:39 crc kubenswrapper[4809]: E0312 08:00:39.835573 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.841213 4809 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.841260 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.841295 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.841315 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.841331 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.855322 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.874645 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.897600 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.913169 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc 
kubenswrapper[4809]: I0312 08:00:39.933877 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.944209 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.944285 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.944308 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.944341 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.944365 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:39Z","lastTransitionTime":"2026-03-12T08:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.956756 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.976298 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:39 crc kubenswrapper[4809]: I0312 08:00:39.992299 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:39Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.012399 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.045950 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\
\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.047812 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.047865 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.047882 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.047905 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 
08:00:40.047921 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.070436 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"nam
e\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.087102 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.105726 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.105816 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.105765 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:40 crc kubenswrapper[4809]: E0312 08:00:40.105949 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:40 crc kubenswrapper[4809]: E0312 08:00:40.106205 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:40 crc kubenswrapper[4809]: E0312 08:00:40.106345 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.108408 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.124412 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.151748 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.152271 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.152291 4809 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.152319 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.152338 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.158278 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cafb4f566b5ba69ecf5eca4e22a9d8ccb639efc79f214281eae7bb6424fa0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:25Z\\\",\\\"message\\\":\\\"twork=default are: map[]\\\\nI0312 08:00:25.399620 6771 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0312 08:00:25.399629 6771 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.36\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0312 08:00:25.399649 6771 services_controller.go:444] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0312 08:00:25.399661 6771 services_controller.go:445] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs for network=default: []services.lbConfig(nil)\\\\nF0312 08:00:25.399668 6771 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.174001 4809 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.255780 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.255828 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.255837 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.255853 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.255863 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.360040 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.360166 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.360191 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.360224 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.360253 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.463630 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.463694 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.463712 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.463738 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.463757 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.567454 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.567519 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.567537 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.567570 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.567588 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.670985 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.671036 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.671054 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.671078 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.671096 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.774813 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.774889 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.774913 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.774944 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.774970 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.840567 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/2.log" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.847550 4809 scope.go:117] "RemoveContainer" containerID="bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89" Mar 12 08:00:40 crc kubenswrapper[4809]: E0312 08:00:40.848036 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.872584 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.878615 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.878671 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.878687 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.878707 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.878722 4809 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.888478 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.909353 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.930147 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.946378 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.969361 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.991199 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.991308 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.991340 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.991368 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.991395 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:40Z","lastTransitionTime":"2026-03-12T08:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:40 crc kubenswrapper[4809]: I0312 08:00:40.994050 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:40Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.018341 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.039484 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.057171 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.074498 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.090935 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc 
kubenswrapper[4809]: I0312 08:00:41.094838 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.094893 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.094913 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.094945 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.094966 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.105204 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:41 crc kubenswrapper[4809]: E0312 08:00:41.105423 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.108980 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.123233 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.135828 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd087139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.153887 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:41Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.197853 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.198043 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.198076 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.198153 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.198181 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.300870 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.300906 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.300917 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.300932 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.300945 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.403958 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.404028 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.404046 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.404074 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.404091 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.507806 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.507861 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.507878 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.507903 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.507924 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.610428 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.610501 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.610521 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.610546 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.610570 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.713279 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.713362 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.713381 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.713407 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.713425 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.816724 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.816784 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.816807 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.816872 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.816895 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.920244 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.920326 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.920435 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.920468 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:41 crc kubenswrapper[4809]: I0312 08:00:41.920491 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:41Z","lastTransitionTime":"2026-03-12T08:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.023542 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.023633 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.023659 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.023691 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.023715 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.104946 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.105034 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:42 crc kubenswrapper[4809]: E0312 08:00:42.105191 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:42 crc kubenswrapper[4809]: E0312 08:00:42.105391 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.104953 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:42 crc kubenswrapper[4809]: E0312 08:00:42.105549 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.127468 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.127541 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.127563 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.127592 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.127610 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.230353 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.230414 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.230431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.230461 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.230479 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.333775 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.333836 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.333855 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.333878 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.333893 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.436831 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.436921 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.436941 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.436972 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.436989 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.540006 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.540067 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.540084 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.540108 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.540151 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.643452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.643532 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.643551 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.643576 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.643594 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.746286 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.746355 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.746379 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.746409 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.746434 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.849743 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.849810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.849859 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.849885 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.849901 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.953235 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.953294 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.953311 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.953353 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:42 crc kubenswrapper[4809]: I0312 08:00:42.953370 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:42Z","lastTransitionTime":"2026-03-12T08:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.055863 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.055930 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.055953 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.055982 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.056004 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.105934 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:43 crc kubenswrapper[4809]: E0312 08:00:43.106151 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.158830 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.158916 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.158934 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.158957 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.158976 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.262682 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.262744 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.262761 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.262788 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.262804 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.365740 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.365804 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.365821 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.365845 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.365862 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.407723 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:43 crc kubenswrapper[4809]: E0312 08:00:43.407888 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:43 crc kubenswrapper[4809]: E0312 08:00:43.407997 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs podName:3d31c58d-0f0d-431f-bebc-57173f467eee nodeName:}" failed. No retries permitted until 2026-03-12 08:00:59.407976854 +0000 UTC m=+132.990012597 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs") pod "network-metrics-daemon-p566k" (UID: "3d31c58d-0f0d-431f-bebc-57173f467eee") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.469234 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.469284 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.469296 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.469315 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.469328 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.572377 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.572468 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.572487 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.572512 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.572530 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.676012 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.676222 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.676260 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.676290 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.676309 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.779515 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.779583 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.779600 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.779626 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.779646 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.881978 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.882084 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.882106 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.882159 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.882180 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.985321 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.985452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.985495 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.985528 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:43 crc kubenswrapper[4809]: I0312 08:00:43.985547 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:43Z","lastTransitionTime":"2026-03-12T08:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.089463 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.089601 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.089666 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.089692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.089709 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.105200 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:44 crc kubenswrapper[4809]: E0312 08:00:44.105345 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.105491 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.105629 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:44 crc kubenswrapper[4809]: E0312 08:00:44.105730 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:44 crc kubenswrapper[4809]: E0312 08:00:44.105852 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.193011 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.193063 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.193085 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.193157 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.193421 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.296192 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.296232 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.296247 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.296266 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.296278 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.399797 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.399863 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.399875 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.399900 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.399917 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.503203 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.503305 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.503329 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.503362 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.503381 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.606169 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.606247 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.606264 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.606293 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.606310 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.709535 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.709592 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.709614 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.709638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.709655 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.813637 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.813736 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.813756 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.813800 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.813841 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.917651 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.917735 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.917759 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.917791 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:44 crc kubenswrapper[4809]: I0312 08:00:44.917815 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:44Z","lastTransitionTime":"2026-03-12T08:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.021016 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.021202 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.021271 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.021308 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.021375 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.105565 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:45 crc kubenswrapper[4809]: E0312 08:00:45.105834 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.125039 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.125103 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.125157 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.125186 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.125209 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.228848 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.228914 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.228932 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.228957 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.228976 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.334054 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.334144 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.334165 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.334194 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.334213 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.437878 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.437944 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.437964 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.437989 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.438006 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.541563 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.541638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.541670 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.541703 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.541726 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.644879 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.644932 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.644950 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.644976 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.644993 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.749548 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.749607 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.749642 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.749675 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.749697 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.853052 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.853177 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.853197 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.853257 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.853277 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.957852 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.957929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.957955 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.957989 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:45 crc kubenswrapper[4809]: I0312 08:00:45.958017 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:45Z","lastTransitionTime":"2026-03-12T08:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.061261 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.061380 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.061447 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.061479 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.061660 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.105561 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.105672 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.105612 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:46 crc kubenswrapper[4809]: E0312 08:00:46.105873 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:46 crc kubenswrapper[4809]: E0312 08:00:46.106009 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:46 crc kubenswrapper[4809]: E0312 08:00:46.106223 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.165368 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.165431 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.165444 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.165466 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.165479 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.268784 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.268871 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.268894 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.268932 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.268968 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.373414 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.373478 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.373491 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.373512 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.373526 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.476825 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.476884 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.476896 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.476920 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.476936 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.580810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.580862 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.580876 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.580896 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.580909 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.684900 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.684944 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.684952 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.684973 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.684986 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.788537 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.788590 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.788604 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.788625 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.788639 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.890654 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.890691 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.890702 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.890716 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.890727 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.994339 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.994391 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.994412 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.994435 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:46 crc kubenswrapper[4809]: I0312 08:00:46.994453 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:46Z","lastTransitionTime":"2026-03-12T08:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:00:47 crc kubenswrapper[4809]: E0312 08:00:47.095244 4809 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.105180 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:47 crc kubenswrapper[4809]: E0312 08:00:47.105355 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.141690 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.159892 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.173836 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.189993 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.204951 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.216649 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: E0312 08:00:47.240015 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.245604 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f0
04bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.266486 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.285161 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.298817 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.315739 4809 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd087139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.342044 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.364604 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.380954 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.401457 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:47 crc kubenswrapper[4809]: I0312 08:00:47.415907 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:47Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.105457 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.105585 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.105837 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.106000 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.106178 4809 scope.go:117] "RemoveContainer" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.106191 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.106313 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.124399 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.206258 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.206321 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.206330 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.206344 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.206355 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:48Z","lastTransitionTime":"2026-03-12T08:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.223712 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.228005 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.228025 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.228033 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.228047 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.228059 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:48Z","lastTransitionTime":"2026-03-12T08:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.245538 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.250603 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.250630 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.250638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.250651 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.250660 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:48Z","lastTransitionTime":"2026-03-12T08:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.271687 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.277588 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.277630 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.277640 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.277655 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.277666 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:48Z","lastTransitionTime":"2026-03-12T08:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.295830 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.299145 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.299166 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.299175 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.299187 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.299198 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:48Z","lastTransitionTime":"2026-03-12T08:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.314503 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: E0312 08:00:48.314612 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.881074 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.883828 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564"} Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.884392 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.901919 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.924825 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.936828 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.952891 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.967956 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:48 crc kubenswrapper[4809]: I0312 08:00:48.986182 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:48Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.003575 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.032187 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.053518 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.066665 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.081467 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.097723 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.105730 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:49 crc kubenswrapper[4809]: E0312 08:00:49.106058 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.116110 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.121449 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.138888 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.151601 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc 
kubenswrapper[4809]: I0312 08:00:49.167602 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:49 crc kubenswrapper[4809]: I0312 08:00:49.184153 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:49Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:50 crc kubenswrapper[4809]: I0312 08:00:50.104900 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:50 crc kubenswrapper[4809]: I0312 08:00:50.104936 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:50 crc kubenswrapper[4809]: E0312 08:00:50.105018 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:50 crc kubenswrapper[4809]: E0312 08:00:50.105164 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:50 crc kubenswrapper[4809]: I0312 08:00:50.105224 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:50 crc kubenswrapper[4809]: E0312 08:00:50.105276 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:51 crc kubenswrapper[4809]: I0312 08:00:51.105387 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:51 crc kubenswrapper[4809]: E0312 08:00:51.105581 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:52 crc kubenswrapper[4809]: I0312 08:00:52.104995 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:52 crc kubenswrapper[4809]: I0312 08:00:52.105200 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:52 crc kubenswrapper[4809]: I0312 08:00:52.105097 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:52 crc kubenswrapper[4809]: E0312 08:00:52.105332 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:52 crc kubenswrapper[4809]: E0312 08:00:52.105412 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:52 crc kubenswrapper[4809]: E0312 08:00:52.105506 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:52 crc kubenswrapper[4809]: E0312 08:00:52.240942 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:00:53 crc kubenswrapper[4809]: I0312 08:00:53.105524 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:53 crc kubenswrapper[4809]: E0312 08:00:53.105800 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:53 crc kubenswrapper[4809]: I0312 08:00:53.108054 4809 scope.go:117] "RemoveContainer" containerID="bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89" Mar 12 08:00:53 crc kubenswrapper[4809]: E0312 08:00:53.108320 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:00:54 crc kubenswrapper[4809]: I0312 08:00:54.105618 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:54 crc kubenswrapper[4809]: E0312 08:00:54.105732 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:54 crc kubenswrapper[4809]: I0312 08:00:54.105637 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:54 crc kubenswrapper[4809]: I0312 08:00:54.105618 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:54 crc kubenswrapper[4809]: E0312 08:00:54.105795 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:54 crc kubenswrapper[4809]: E0312 08:00:54.105982 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:55 crc kubenswrapper[4809]: I0312 08:00:55.105442 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:55 crc kubenswrapper[4809]: E0312 08:00:55.105605 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:56 crc kubenswrapper[4809]: I0312 08:00:56.105715 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:56 crc kubenswrapper[4809]: I0312 08:00:56.105715 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:56 crc kubenswrapper[4809]: I0312 08:00:56.105748 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:56 crc kubenswrapper[4809]: E0312 08:00:56.107549 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:56 crc kubenswrapper[4809]: E0312 08:00:56.107662 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:56 crc kubenswrapper[4809]: E0312 08:00:56.107719 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.105252 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:57 crc kubenswrapper[4809]: E0312 08:00:57.105380 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.123892 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f
75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.134557 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.146493 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.156463 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.168382 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.180825 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.200592 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.211448 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.224756 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.237105 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: E0312 08:00:57.241380 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.255760 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.271947 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.285865 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.298235 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc 
kubenswrapper[4809]: I0312 08:00:57.313063 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.325464 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.337603 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:57 crc kubenswrapper[4809]: I0312 08:00:57.348157 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:57Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.105508 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.105545 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.105554 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.105675 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.105864 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.106028 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.636168 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.636270 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.636287 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.636311 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.636328 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:58Z","lastTransitionTime":"2026-03-12T08:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.653700 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:58Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.659546 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.659607 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.659627 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.659654 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.659670 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:58Z","lastTransitionTime":"2026-03-12T08:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.676678 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:58Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.681644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.681707 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.681725 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.681750 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.681769 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:58Z","lastTransitionTime":"2026-03-12T08:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.703300 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:58Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.708610 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.708660 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.708677 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.708700 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.708716 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:58Z","lastTransitionTime":"2026-03-12T08:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.725560 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:58Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.730398 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.730452 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.730471 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.730494 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:00:58 crc kubenswrapper[4809]: I0312 08:00:58.730511 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:00:58Z","lastTransitionTime":"2026-03-12T08:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.751953 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:00:58Z is after 2025-08-24T17:21:41Z" Mar 12 08:00:58 crc kubenswrapper[4809]: E0312 08:00:58.752427 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:00:59 crc kubenswrapper[4809]: I0312 08:00:59.105490 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:59 crc kubenswrapper[4809]: E0312 08:00:59.105730 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:00:59 crc kubenswrapper[4809]: I0312 08:00:59.411265 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:00:59 crc kubenswrapper[4809]: E0312 08:00:59.411416 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:00:59 crc kubenswrapper[4809]: E0312 08:00:59.411466 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs podName:3d31c58d-0f0d-431f-bebc-57173f467eee nodeName:}" failed. No retries permitted until 2026-03-12 08:01:31.411452137 +0000 UTC m=+164.993487870 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs") pod "network-metrics-daemon-p566k" (UID: "3d31c58d-0f0d-431f-bebc-57173f467eee") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:01:00 crc kubenswrapper[4809]: I0312 08:01:00.104980 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:00 crc kubenswrapper[4809]: I0312 08:01:00.105095 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:00 crc kubenswrapper[4809]: E0312 08:01:00.105189 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:00 crc kubenswrapper[4809]: I0312 08:01:00.104980 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:00 crc kubenswrapper[4809]: E0312 08:01:00.105328 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:00 crc kubenswrapper[4809]: E0312 08:01:00.105462 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.105324 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:01 crc kubenswrapper[4809]: E0312 08:01:01.105586 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.797736 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.818823 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.839021 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.870375 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.885391 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.907204 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.929498 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/0.log" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.929625 4809 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.929902 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xgl7" event={"ID":"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff","Type":"ContainerDied","Data":"aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b"} Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.930053 4809 generic.go:334] "Generic (PLEG): container finished" podID="85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff" containerID="aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b" exitCode=1 Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.931021 4809 scope.go:117] "RemoveContainer" containerID="aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.949712 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.975824 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:01 crc kubenswrapper[4809]: I0312 08:01:01.991855 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:01Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc 
kubenswrapper[4809]: I0312 08:01:02.007294 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c247613
0eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.028030 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.044293 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.059579 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.076022 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.105797 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:02 crc kubenswrapper[4809]: E0312 08:01:02.105946 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.106142 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:02 crc kubenswrapper[4809]: E0312 08:01:02.106299 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.106433 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.107155 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:02 crc kubenswrapper[4809]: E0312 08:01:02.107431 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.124480 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.140534 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.156390 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.175590 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.194469 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.213623 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.237693 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: E0312 08:01:02.243176 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.257608 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of 
http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.282776 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.299684 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.314240 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.333279 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"2026-03-12T08:00:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3\\\\n2026-03-12T08:00:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3 to /host/opt/cni/bin/\\\\n2026-03-12T08:00:16Z [verbose] multus-daemon started\\\\n2026-03-12T08:00:16Z [verbose] Readiness Indicator file check\\\\n2026-03-12T08:01:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.353419 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc 
kubenswrapper[4809]: I0312 08:01:02.388969 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.408946 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.429395 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.448547 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.469574 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.487221 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.514649 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.528564 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.936824 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/0.log" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.936906 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xgl7" event={"ID":"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff","Type":"ContainerStarted","Data":"a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6"} Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.959589 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.976587 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:02 crc kubenswrapper[4809]: I0312 08:01:02.993383 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:02Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.016446 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.037269 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"2026-03-12T08:00:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3\\\\n2026-03-12T08:00:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3 to /host/opt/cni/bin/\\\\n2026-03-12T08:00:16Z [verbose] multus-daemon started\\\\n2026-03-12T08:00:16Z [verbose] Readiness Indicator file check\\\\n2026-03-12T08:01:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.053188 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc 
kubenswrapper[4809]: I0312 08:01:03.075264 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c247613
0eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.095470 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.105392 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:03 crc kubenswrapper[4809]: E0312 08:01:03.105537 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.110905 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.124363 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.146896 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.162859 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.183629 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.199778 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.218091 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.235330 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.269041 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:03 crc kubenswrapper[4809]: I0312 08:01:03.282603 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:03Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:04 crc kubenswrapper[4809]: I0312 08:01:04.105328 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:04 crc kubenswrapper[4809]: I0312 08:01:04.105447 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:04 crc kubenswrapper[4809]: E0312 08:01:04.105501 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:04 crc kubenswrapper[4809]: I0312 08:01:04.105348 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:04 crc kubenswrapper[4809]: E0312 08:01:04.105641 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:04 crc kubenswrapper[4809]: E0312 08:01:04.105809 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:05 crc kubenswrapper[4809]: I0312 08:01:05.105246 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:05 crc kubenswrapper[4809]: E0312 08:01:05.105492 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.105574 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.105666 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:06 crc kubenswrapper[4809]: E0312 08:01:06.105766 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.105880 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:06 crc kubenswrapper[4809]: E0312 08:01:06.106575 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:06 crc kubenswrapper[4809]: E0312 08:01:06.106828 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.107200 4809 scope.go:117] "RemoveContainer" containerID="bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89" Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.953983 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/2.log" Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.957141 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d"} Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.957712 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.975479 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:01:06Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:06 crc kubenswrapper[4809]: I0312 08:01:06.991106 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:06Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.013441 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.028867 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.046990 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032
af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.066044 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.085014 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.094875 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.104972 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:07 crc kubenswrapper[4809]: E0312 08:01:07.105220 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.112170 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"2026-03-12T08:00:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3\\\\n2026-03-12T08:00:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3 to /host/opt/cni/bin/\\\\n2026-03-12T08:00:16Z [verbose] multus-daemon started\\\\n2026-03-12T08:00:16Z [verbose] Readiness Indicator file check\\\\n2026-03-12T08:01:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.126363 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc 
kubenswrapper[4809]: I0312 08:01:07.140453 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.156393 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.170242 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.201947 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.219014 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.243229 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: E0312 08:01:07.244062 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.269465 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.292763 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.313134 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.332480 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.362493 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.377661 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.398273 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.413697 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.428062 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.445579 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.463925 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"2026-03-12T08:00:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3\\\\n2026-03-12T08:00:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3 to /host/opt/cni/bin/\\\\n2026-03-12T08:00:16Z [verbose] multus-daemon started\\\\n2026-03-12T08:00:16Z [verbose] Readiness Indicator file check\\\\n2026-03-12T08:01:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.475413 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc 
kubenswrapper[4809]: I0312 08:01:07.496066 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c247613
0eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.513760 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.533016 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.549903 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.584594 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.603896 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.620725 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.639989 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.964759 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/3.log" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.965975 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/2.log" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.970702 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d" exitCode=1 Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.970802 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d"} Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.970892 4809 scope.go:117] "RemoveContainer" containerID="bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.972436 4809 scope.go:117] "RemoveContainer" containerID="b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d" Mar 12 08:01:07 crc kubenswrapper[4809]: E0312 08:01:07.973157 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:01:07 crc kubenswrapper[4809]: I0312 08:01:07.987288 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:07Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.010687 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"2026-03-12T08:00:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3\\\\n2026-03-12T08:00:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3 to /host/opt/cni/bin/\\\\n2026-03-12T08:00:16Z [verbose] multus-daemon started\\\\n2026-03-12T08:00:16Z [verbose] Readiness Indicator file check\\\\n2026-03-12T08:01:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.018922 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.019037 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.019070 4809 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.019254 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.01923114 +0000 UTC m=+205.601266943 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.019167 4809 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.019389 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.019352873 +0000 UTC m=+205.601388646 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.024328 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc 
kubenswrapper[4809]: I0312 08:01:08.042306 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c247613
0eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.059977 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.076678 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.099853 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.105744 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.105877 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.106049 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.106145 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.106336 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.105889 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.115862 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.119705 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.119917 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.119887302 +0000 UTC m=+205.701923065 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.120258 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.120474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.120479 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.120791 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.120930 4809 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.121103 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.121083546 +0000 UTC m=+205.703119309 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.120554 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.121443 4809 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.121581 4809 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.121751 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.121733114 +0000 UTC m=+205.703768877 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.131001 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 
crc kubenswrapper[4809]: I0312 08:01:08.145152 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd087139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.158679 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.172776 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.201645 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7061a04af8eca7049be8a049e77071d26305cf53171b0ba73714b5f0fe8f89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:00:39Z\\\",\\\"message\\\":\\\"t-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0008826d7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0312 08:00:39.104426 7023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:07Z\\\",\\\"message\\\":\\\"e column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 08:01:07.047140 7366 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none 
reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 08:01:07.047425 7366 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0312 08:01:07.047542 7366 factory.go:656] Stopping watch factory\\\\nI0312 08:01:07.047541 7366 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0312 08:01:07.047560 7366 ovnkube.go:599] Stopped ovnkube\\\\nI0312 08:01:07.047588 7366 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 08:01:07.047667 7366 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overri
des\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 
08:01:08.217931 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.244786 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.265067 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptabl
es-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.282883 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87
b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.304592 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2
026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27
ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.827690 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.827793 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.827816 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.827848 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.827871 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:08Z","lastTransitionTime":"2026-03-12T08:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.851668 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.856588 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.856653 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.856671 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.856694 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.856712 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:08Z","lastTransitionTime":"2026-03-12T08:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.875379 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.879692 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.879757 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.879776 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.879801 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.879821 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:08Z","lastTransitionTime":"2026-03-12T08:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.894806 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.899794 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.899834 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.899844 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.899862 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.899874 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:08Z","lastTransitionTime":"2026-03-12T08:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.917610 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.921987 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.922036 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.922049 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.922066 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.922082 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:08Z","lastTransitionTime":"2026-03-12T08:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.941037 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.941217 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.976193 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/3.log" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.981034 4809 scope.go:117] "RemoveContainer" containerID="b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d" Mar 12 08:01:08 crc kubenswrapper[4809]: E0312 08:01:08.981271 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:01:08 crc kubenswrapper[4809]: I0312 08:01:08.995870 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032
af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:08Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.012038 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.026036 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.039701 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.055369 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"2026-03-12T08:00:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3\\\\n2026-03-12T08:00:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3 to /host/opt/cni/bin/\\\\n2026-03-12T08:00:16Z [verbose] multus-daemon started\\\\n2026-03-12T08:00:16Z [verbose] Readiness Indicator file check\\\\n2026-03-12T08:01:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.065698 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc 
kubenswrapper[4809]: I0312 08:01:09.077876 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.090830 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.101734 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.105189 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:09 crc kubenswrapper[4809]: E0312 08:01:09.105335 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.117943 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T
07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.128464 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.145497 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:07Z\\\",\\\"message\\\":\\\"e column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 08:01:07.047140 7366 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 08:01:07.047425 7366 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0312 08:01:07.047542 7366 factory.go:656] Stopping watch factory\\\\nI0312 08:01:07.047541 7366 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0312 08:01:07.047560 7366 ovnkube.go:599] Stopped ovnkube\\\\nI0312 08:01:07.047588 7366 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 08:01:07.047667 7366 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.153930 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.169191 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.185667 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.200956 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.217753 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:09 crc kubenswrapper[4809]: I0312 08:01:09.234668 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:09Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:10 crc kubenswrapper[4809]: I0312 08:01:10.104975 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:10 crc kubenswrapper[4809]: E0312 08:01:10.105564 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:10 crc kubenswrapper[4809]: I0312 08:01:10.105037 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:10 crc kubenswrapper[4809]: E0312 08:01:10.105778 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:10 crc kubenswrapper[4809]: I0312 08:01:10.104975 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:10 crc kubenswrapper[4809]: E0312 08:01:10.105958 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:11 crc kubenswrapper[4809]: I0312 08:01:11.106500 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:11 crc kubenswrapper[4809]: E0312 08:01:11.106695 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:12 crc kubenswrapper[4809]: I0312 08:01:12.105220 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:12 crc kubenswrapper[4809]: I0312 08:01:12.105234 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:12 crc kubenswrapper[4809]: E0312 08:01:12.105424 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:12 crc kubenswrapper[4809]: E0312 08:01:12.105665 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:12 crc kubenswrapper[4809]: I0312 08:01:12.105917 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:12 crc kubenswrapper[4809]: E0312 08:01:12.106432 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:12 crc kubenswrapper[4809]: E0312 08:01:12.245375 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:13 crc kubenswrapper[4809]: I0312 08:01:13.105666 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:13 crc kubenswrapper[4809]: E0312 08:01:13.106004 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:14 crc kubenswrapper[4809]: I0312 08:01:14.105841 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:14 crc kubenswrapper[4809]: I0312 08:01:14.105937 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:14 crc kubenswrapper[4809]: I0312 08:01:14.105862 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:14 crc kubenswrapper[4809]: E0312 08:01:14.106027 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:14 crc kubenswrapper[4809]: E0312 08:01:14.106221 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:14 crc kubenswrapper[4809]: E0312 08:01:14.106381 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:15 crc kubenswrapper[4809]: I0312 08:01:15.105464 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:15 crc kubenswrapper[4809]: E0312 08:01:15.105732 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:16 crc kubenswrapper[4809]: I0312 08:01:16.105255 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:16 crc kubenswrapper[4809]: I0312 08:01:16.105380 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:16 crc kubenswrapper[4809]: I0312 08:01:16.105274 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:16 crc kubenswrapper[4809]: E0312 08:01:16.105463 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:16 crc kubenswrapper[4809]: E0312 08:01:16.105554 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:16 crc kubenswrapper[4809]: E0312 08:01:16.105680 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.105199 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:17 crc kubenswrapper[4809]: E0312 08:01:17.105738 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.143163 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:07Z\\\",\\\"message\\\":\\\"e column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 08:01:07.047140 7366 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 08:01:07.047425 7366 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0312 08:01:07.047542 7366 factory.go:656] Stopping watch factory\\\\nI0312 08:01:07.047541 7366 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0312 08:01:07.047560 7366 ovnkube.go:599] Stopped ovnkube\\\\nI0312 08:01:07.047588 7366 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 08:01:07.047667 7366 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.161840 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.182333 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.200069 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.219665 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e
610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.243942 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: E0312 08:01:17.246670 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.267228 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.288944 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptabl
es-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.311776 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd89
09e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.332958 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.350014 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.372162 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"2026-03-12T08:00:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3\\\\n2026-03-12T08:00:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3 to /host/opt/cni/bin/\\\\n2026-03-12T08:00:16Z [verbose] multus-daemon started\\\\n2026-03-12T08:00:16Z [verbose] Readiness Indicator file check\\\\n2026-03-12T08:01:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.392823 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc 
kubenswrapper[4809]: I0312 08:01:17.417728 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c247613
0eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.438811 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.456452 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.490091 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:17 crc kubenswrapper[4809]: I0312 08:01:17.509537 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:17Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:18 crc kubenswrapper[4809]: I0312 08:01:18.105597 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:18 crc kubenswrapper[4809]: I0312 08:01:18.105650 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:18 crc kubenswrapper[4809]: I0312 08:01:18.105717 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:18 crc kubenswrapper[4809]: E0312 08:01:18.105802 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:18 crc kubenswrapper[4809]: E0312 08:01:18.106076 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:18 crc kubenswrapper[4809]: E0312 08:01:18.106287 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.105383 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:19 crc kubenswrapper[4809]: E0312 08:01:19.105599 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.230210 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.230864 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.230894 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.230929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.230957 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:19Z","lastTransitionTime":"2026-03-12T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:19 crc kubenswrapper[4809]: E0312 08:01:19.252008 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.257810 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.257890 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.257912 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.257945 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.257967 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:19Z","lastTransitionTime":"2026-03-12T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:19 crc kubenswrapper[4809]: E0312 08:01:19.278940 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.286844 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.286929 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.286951 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.286980 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.287003 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:19Z","lastTransitionTime":"2026-03-12T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:19 crc kubenswrapper[4809]: E0312 08:01:19.309261 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.314502 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.314590 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.314618 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.314648 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.314670 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:19Z","lastTransitionTime":"2026-03-12T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:19 crc kubenswrapper[4809]: E0312 08:01:19.340220 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.346982 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.347159 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.347186 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.347210 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:19 crc kubenswrapper[4809]: I0312 08:01:19.347249 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:19Z","lastTransitionTime":"2026-03-12T08:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:19 crc kubenswrapper[4809]: E0312 08:01:19.373344 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:19Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:19 crc kubenswrapper[4809]: E0312 08:01:19.373577 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:01:20 crc kubenswrapper[4809]: I0312 08:01:20.105957 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:20 crc kubenswrapper[4809]: I0312 08:01:20.106022 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:20 crc kubenswrapper[4809]: E0312 08:01:20.106169 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:20 crc kubenswrapper[4809]: I0312 08:01:20.106237 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:20 crc kubenswrapper[4809]: E0312 08:01:20.106425 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:20 crc kubenswrapper[4809]: E0312 08:01:20.106524 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:20 crc kubenswrapper[4809]: I0312 08:01:20.108060 4809 scope.go:117] "RemoveContainer" containerID="b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d" Mar 12 08:01:20 crc kubenswrapper[4809]: E0312 08:01:20.108511 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:01:21 crc kubenswrapper[4809]: I0312 08:01:21.105787 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:21 crc kubenswrapper[4809]: E0312 08:01:21.106016 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:21 crc kubenswrapper[4809]: I0312 08:01:21.119031 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 12 08:01:22 crc kubenswrapper[4809]: I0312 08:01:22.105966 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:22 crc kubenswrapper[4809]: E0312 08:01:22.106156 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:22 crc kubenswrapper[4809]: I0312 08:01:22.105991 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:22 crc kubenswrapper[4809]: E0312 08:01:22.106268 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:22 crc kubenswrapper[4809]: I0312 08:01:22.105969 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:22 crc kubenswrapper[4809]: E0312 08:01:22.106390 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:22 crc kubenswrapper[4809]: E0312 08:01:22.247955 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:23 crc kubenswrapper[4809]: I0312 08:01:23.105536 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:23 crc kubenswrapper[4809]: E0312 08:01:23.105973 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:24 crc kubenswrapper[4809]: I0312 08:01:24.105193 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:24 crc kubenswrapper[4809]: I0312 08:01:24.105289 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:24 crc kubenswrapper[4809]: I0312 08:01:24.105333 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:24 crc kubenswrapper[4809]: E0312 08:01:24.105829 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:24 crc kubenswrapper[4809]: E0312 08:01:24.105950 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:24 crc kubenswrapper[4809]: E0312 08:01:24.105729 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:25 crc kubenswrapper[4809]: I0312 08:01:25.105860 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:25 crc kubenswrapper[4809]: E0312 08:01:25.106069 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:26 crc kubenswrapper[4809]: I0312 08:01:26.105341 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:26 crc kubenswrapper[4809]: I0312 08:01:26.105443 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:26 crc kubenswrapper[4809]: E0312 08:01:26.105564 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:26 crc kubenswrapper[4809]: E0312 08:01:26.105655 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:26 crc kubenswrapper[4809]: I0312 08:01:26.105572 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:26 crc kubenswrapper[4809]: E0312 08:01:26.105840 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.106040 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:27 crc kubenswrapper[4809]: E0312 08:01:27.106631 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.116826 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kbwnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc48a9a-cc0a-4361-b1bc-7c7684f9bf93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb5d8cd4006dec3e1304218cda9d03ea761465524e055daccb8e61ef1368a7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-j44jn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kbwnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.135558 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xgl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad
920d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"message\\\":\\\"2026-03-12T08:00:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3\\\\n2026-03-12T08:00:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_40cc8575-4964-494a-ac65-ea59ea86e3c3 to /host/opt/cni/bin/\\\\n2026-03-12T08:00:16Z [verbose] multus-daemon started\\\\n2026-03-12T08:00:16Z [verbose] Readiness Indicator file check\\\\n2026-03-12T08:01:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2nhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xgl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.153216 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p566k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d31c58d-0f0d-431f-bebc-57173f467eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j742\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p566k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc 
kubenswrapper[4809]: I0312 08:01:27.166254 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40ec2b4-d1f5-49c8-bad4-5a5c265069cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a062252e0e5fcfebe2a2395329a45050cbcc0b991ac12aeaf1120d054fc4217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a08fd81f3edc99cf92fd9feec5a8ef32ad3566a45d6247cd7e7d947cea7979bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a08fd81f3edc99cf92fd9feec5a8ef32ad3566a45d6247cd7e7d947cea7979bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.183435 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f5e61a-c077-449f-8291-dbf93ac9aca3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T08:00:02Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0312 08:00:01.682190 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 08:00:01.682347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 08:00:01.683010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1444506139/tls.crt::/tmp/serving-cert-1444506139/tls.key\\\\\\\"\\\\nI0312 08:00:02.063779 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 08:00:02.066800 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 08:00:02.066837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 08:00:02.066881 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 08:00:02.066894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 08:00:02.073309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 08:00:02.073332 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073337 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 08:00:02.073342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 08:00:02.073345 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 08:00:02.073348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 08:00:02.073351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 08:00:02.073351 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0312 08:00:02.075060 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:01Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e932ddfa6a6d089daba38ca760a032
af21e88df938f64ed4ae2f7782103b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.204921 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83bfb2b1536a6a7a823599dddb47b6ad50b038509d00298d2fcee85b0a5b6c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.223746 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.244773 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90601560-88a4-4e9e-8c6c-b7cfd6193dfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcce0bfc6e523f2bd2e05ab2403c80a280f3c22b7c9d4e61bde3bb414b1b6c1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afe86a0784b3f9ecefb87d8d8de1e616420abe185dfcf0e47ad17f7d3e18a0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ecaed5f8be6ee7aa9b143575897770874696cfd0d71c30ef8e46d75a67be75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d974ea0c31aedbb16a24b692f45399aa47ba7a1e4b14467bb954c10757b38cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32315e2dbe601b08a2b2c3db8a50dfc5d99be959dbf33c19f6a2c55df38c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28106812f58b36dc991efb2c3b84b4c3f38016271262418de7cb6d3130ee2743\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759e1933c261395715f2209ea1aa72957dfb4c5b1049c830e87854b0dcdba55f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0d58c5fd72b76f75c191f46f25e0aa9fb3026191f5fb97c210907e1ea38baa1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: E0312 08:01:27.248554 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.260623 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050281d-f7e1-447e-9cfd-66fe7342efff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a8ee6bd6947c48013498a11e9479a358028706146a645bc6632fcda8717bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cb0402282c7f399b69b70fb9af47794517f105267b536b58393e3c69aa0cacc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f6b51b0d31592c1695ecc8f19dd957bcb44a4a9c6ef3cf371818c729aee03e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://9274fd03c22f70f9a14af97eb08241f288c6fa0c1c69b75f77c1057d35a7cbc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T07:58:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.278755 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0dc5e31901266db36374e4881b99b1a40767bc6ff58afe1e6226adf1f5c60fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aef513f474f25e56fd48d7c8b5f110939514be75dec224356dee883508d604f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.290674 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b97e5c85-4adb-4061-b2e0-1be7440c2133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aba607a1e3cfe1831b9a97989571fc863a712e4579f87587f6d0089b255ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edfd33ff837e4b09002fef30a5d35f242dd08
7139d7cabc6416bae56811968c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f99hq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d7prn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.304311 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.317199 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.340540 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T08:01:07Z\\\",\\\"message\\\":\\\"e column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 08:01:07.047140 7366 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0312 08:01:07.047425 7366 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0312 08:01:07.047542 7366 factory.go:656] Stopping watch factory\\\\nI0312 08:01:07.047541 7366 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0312 08:01:07.047560 7366 ovnkube.go:599] Stopped ovnkube\\\\nI0312 08:01:07.047588 7366 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0312 08:01:07.047667 7366 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T08:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dba8662650bf1433ed
0fa43f8119364c9b03119d2ba62985c825e779669b6828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7h9l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.350831 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vcxlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7db3e1-e4cd-4800-a7c2-30d8254caa4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a976f7b8cb11f2d6a4393b26321f830f79835db3c42bdc340ac4b189ee0e56d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vcxlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.362271 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d5675be-ed00-4551-8b81-3ec73f344320\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T07:58:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fceb15dac1090a34620398fe604b2fe04af865b536619b0491dbc17edb6242b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0fd6b286ccf1bf7747d31f1174259c16f2a9e0d371eb039a48c135b6118ace\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T07:59:18Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 07:58:49.313745 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 07:58:49.315796 1 observer_polling.go:159] Starting file observer\\\\nI0312 07:58:49.351950 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 07:58:49.359297 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0312 07:59:18.847872 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0312 07:59:18.847934 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c0f55493f640a3d45a1337317ad9715adf5e032d370a4c4fa72254a52368ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31ad14167b5c219c3f37847ba32dd923e81e22b517173000441413914c132d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T07:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T07:58:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.373019 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4677e418d05a581aee3be8fd421402c02922c5eaf004cbcae1ae691fb67d283e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.382614 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"101483ba-8ed3-40eb-9855-077e9add029f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a3044e9eaa4ccc87b594a3f5440d95bf23c648ceb2e8f45abeadeda79b4a773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prxds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6d4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:27 crc kubenswrapper[4809]: I0312 08:01:27.394223 4809 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab4c7cca-c503-41d4-8abf-5b19429defff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T08:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaaba74bba436056c80065bda470df45e31d6405fc8b717b0299b5a0fa3ef37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T08:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82cb6004e82b37bef0726b533d990daf76a16c7fb25a015102d9d854bc69271b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef92b238f6dd7e873d92f846bfa5a6dcbc6f5ab26617736df052a7f331fc5f4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04cf364fbd4a2f9039a1733e337000f89db4b2ef6b4cd8de079f4c01c91add76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844db
69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844db69e092396d79015605fd21fde37c13689eb546b220c8e9ecc789aa7dc75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5143175a54d02709bf304d2a4d50a1c3b2c441552e206628f9e1a5dfdb27ae97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa2e910db1674a7a4e4f09b6e01d305daf0f6ef4c7fee24090365dba810b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T08:00:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T08:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fp69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T08:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwn7b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:27Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:28 crc kubenswrapper[4809]: I0312 08:01:28.105698 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:28 crc kubenswrapper[4809]: I0312 08:01:28.105778 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:28 crc kubenswrapper[4809]: I0312 08:01:28.105977 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:28 crc kubenswrapper[4809]: E0312 08:01:28.105968 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:28 crc kubenswrapper[4809]: E0312 08:01:28.106216 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:28 crc kubenswrapper[4809]: E0312 08:01:28.106350 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.106074 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:29 crc kubenswrapper[4809]: E0312 08:01:29.106335 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.741908 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.742007 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.742030 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.742061 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.742082 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:29Z","lastTransitionTime":"2026-03-12T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:29 crc kubenswrapper[4809]: E0312 08:01:29.762441 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:29Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.766745 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.766784 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.766795 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.766814 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.766825 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:29Z","lastTransitionTime":"2026-03-12T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:29 crc kubenswrapper[4809]: E0312 08:01:29.785409 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:29Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.790012 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.790048 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.790080 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.790098 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.790138 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:29Z","lastTransitionTime":"2026-03-12T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:29 crc kubenswrapper[4809]: E0312 08:01:29.808004 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:29Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.811644 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.811708 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.811728 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.811753 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.811771 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:29Z","lastTransitionTime":"2026-03-12T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:29 crc kubenswrapper[4809]: E0312 08:01:29.823987 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:29Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.827593 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.827628 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.827638 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.827654 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:29 crc kubenswrapper[4809]: I0312 08:01:29.827665 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:29Z","lastTransitionTime":"2026-03-12T08:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:29 crc kubenswrapper[4809]: E0312 08:01:29.837972 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T08:01:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5485f66-5ce6-43bd-a720-57f2c3255098\\\",\\\"systemUUID\\\":\\\"f124771b-42e6-4243-8a1b-002105aaecbe\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T08:01:29Z is after 2025-08-24T17:21:41Z" Mar 12 08:01:29 crc kubenswrapper[4809]: E0312 08:01:29.838105 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:01:30 crc kubenswrapper[4809]: I0312 08:01:30.105235 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:30 crc kubenswrapper[4809]: E0312 08:01:30.105353 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:30 crc kubenswrapper[4809]: I0312 08:01:30.105390 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:30 crc kubenswrapper[4809]: I0312 08:01:30.105569 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:30 crc kubenswrapper[4809]: E0312 08:01:30.105695 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:30 crc kubenswrapper[4809]: E0312 08:01:30.105874 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:31 crc kubenswrapper[4809]: I0312 08:01:31.105689 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:31 crc kubenswrapper[4809]: E0312 08:01:31.106229 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:31 crc kubenswrapper[4809]: I0312 08:01:31.107535 4809 scope.go:117] "RemoveContainer" containerID="b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d" Mar 12 08:01:31 crc kubenswrapper[4809]: E0312 08:01:31.107787 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:01:31 crc kubenswrapper[4809]: I0312 08:01:31.498527 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:31 crc kubenswrapper[4809]: E0312 08:01:31.498699 4809 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:01:31 crc kubenswrapper[4809]: E0312 08:01:31.498781 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs podName:3d31c58d-0f0d-431f-bebc-57173f467eee nodeName:}" failed. No retries permitted until 2026-03-12 08:02:35.498759227 +0000 UTC m=+229.080795000 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs") pod "network-metrics-daemon-p566k" (UID: "3d31c58d-0f0d-431f-bebc-57173f467eee") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 08:01:32 crc kubenswrapper[4809]: I0312 08:01:32.105221 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:32 crc kubenswrapper[4809]: I0312 08:01:32.105349 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:32 crc kubenswrapper[4809]: E0312 08:01:32.105423 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:32 crc kubenswrapper[4809]: E0312 08:01:32.105644 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:32 crc kubenswrapper[4809]: I0312 08:01:32.105771 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:32 crc kubenswrapper[4809]: E0312 08:01:32.105869 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:32 crc kubenswrapper[4809]: E0312 08:01:32.307634 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:33 crc kubenswrapper[4809]: I0312 08:01:33.105779 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:33 crc kubenswrapper[4809]: E0312 08:01:33.106025 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:34 crc kubenswrapper[4809]: I0312 08:01:34.105943 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:34 crc kubenswrapper[4809]: I0312 08:01:34.105999 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:34 crc kubenswrapper[4809]: I0312 08:01:34.105999 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:34 crc kubenswrapper[4809]: E0312 08:01:34.106256 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:34 crc kubenswrapper[4809]: E0312 08:01:34.106416 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:34 crc kubenswrapper[4809]: E0312 08:01:34.106690 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:35 crc kubenswrapper[4809]: I0312 08:01:35.105842 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:35 crc kubenswrapper[4809]: E0312 08:01:35.106193 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:36 crc kubenswrapper[4809]: I0312 08:01:36.105882 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:36 crc kubenswrapper[4809]: I0312 08:01:36.105987 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:36 crc kubenswrapper[4809]: I0312 08:01:36.106037 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:36 crc kubenswrapper[4809]: E0312 08:01:36.106077 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:36 crc kubenswrapper[4809]: E0312 08:01:36.106295 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:36 crc kubenswrapper[4809]: E0312 08:01:36.106510 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.105817 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:37 crc kubenswrapper[4809]: E0312 08:01:37.106028 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.159851 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=48.159822054 podStartE2EDuration="48.159822054s" podCreationTimestamp="2026-03-12 08:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.141873096 +0000 UTC m=+170.723908869" watchObservedRunningTime="2026-03-12 08:01:37.159822054 +0000 UTC m=+170.741857827" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.217872 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podStartSLOduration=133.217846448 podStartE2EDuration="2m13.217846448s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.181095207 +0000 UTC m=+170.763130980" watchObservedRunningTime="2026-03-12 08:01:37.217846448 +0000 UTC m=+170.799882221" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.218088 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mwn7b" podStartSLOduration=133.218079904 podStartE2EDuration="2m13.218079904s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.218037083 +0000 UTC m=+170.800072856" watchObservedRunningTime="2026-03-12 08:01:37.218079904 +0000 UTC m=+170.800115677" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.241533 4809 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-multus/multus-4xgl7" podStartSLOduration=133.241505938 podStartE2EDuration="2m13.241505938s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.241276432 +0000 UTC m=+170.823312225" watchObservedRunningTime="2026-03-12 08:01:37.241505938 +0000 UTC m=+170.823541701" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.278387 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.278362132 podStartE2EDuration="16.278362132s" podCreationTimestamp="2026-03-12 08:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.277810217 +0000 UTC m=+170.859846030" watchObservedRunningTime="2026-03-12 08:01:37.278362132 +0000 UTC m=+170.860397875" Mar 12 08:01:37 crc kubenswrapper[4809]: E0312 08:01:37.308433 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.323918 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.323898423 podStartE2EDuration="1m28.323898423s" podCreationTimestamp="2026-03-12 08:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.303003581 +0000 UTC m=+170.885039304" watchObservedRunningTime="2026-03-12 08:01:37.323898423 +0000 UTC m=+170.905934166" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.351294 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kbwnt" podStartSLOduration=133.351275388 podStartE2EDuration="2m13.351275388s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.350740883 +0000 UTC m=+170.932776616" watchObservedRunningTime="2026-03-12 08:01:37.351275388 +0000 UTC m=+170.933311121" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.386625 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=81.38660559 podStartE2EDuration="1m21.38660559s" podCreationTimestamp="2026-03-12 08:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.38631028 +0000 UTC m=+170.968346043" watchObservedRunningTime="2026-03-12 08:01:37.38660559 +0000 UTC m=+170.968641323" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.421534 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.421516029 podStartE2EDuration="49.421516029s" podCreationTimestamp="2026-03-12 
08:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.404765324 +0000 UTC m=+170.986801067" watchObservedRunningTime="2026-03-12 08:01:37.421516029 +0000 UTC m=+171.003551762" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.434551 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d7prn" podStartSLOduration=133.434527587 podStartE2EDuration="2m13.434527587s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.433991031 +0000 UTC m=+171.016026764" watchObservedRunningTime="2026-03-12 08:01:37.434527587 +0000 UTC m=+171.016563330" Mar 12 08:01:37 crc kubenswrapper[4809]: I0312 08:01:37.530762 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vcxlw" podStartSLOduration=133.530744583 podStartE2EDuration="2m13.530744583s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:37.530198947 +0000 UTC m=+171.112234680" watchObservedRunningTime="2026-03-12 08:01:37.530744583 +0000 UTC m=+171.112780316" Mar 12 08:01:38 crc kubenswrapper[4809]: I0312 08:01:38.105858 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:38 crc kubenswrapper[4809]: I0312 08:01:38.105941 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:38 crc kubenswrapper[4809]: I0312 08:01:38.106005 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:38 crc kubenswrapper[4809]: E0312 08:01:38.106165 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:38 crc kubenswrapper[4809]: E0312 08:01:38.106393 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:38 crc kubenswrapper[4809]: E0312 08:01:38.106856 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:39 crc kubenswrapper[4809]: I0312 08:01:39.105300 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:39 crc kubenswrapper[4809]: E0312 08:01:39.105437 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:39 crc kubenswrapper[4809]: I0312 08:01:39.965546 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 08:01:39 crc kubenswrapper[4809]: I0312 08:01:39.965581 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 08:01:39 crc kubenswrapper[4809]: I0312 08:01:39.965590 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 08:01:39 crc kubenswrapper[4809]: I0312 08:01:39.965603 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 08:01:39 crc kubenswrapper[4809]: I0312 08:01:39.965613 4809 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T08:01:39Z","lastTransitionTime":"2026-03-12T08:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.026895 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd"] Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.027260 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.030620 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.031055 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.031342 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.031711 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.102231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/63796dd3-f853-4adf-b368-b20086dcc602-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.102264 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63796dd3-f853-4adf-b368-b20086dcc602-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: 
\"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.102293 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/63796dd3-f853-4adf-b368-b20086dcc602-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.102403 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63796dd3-f853-4adf-b368-b20086dcc602-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.102426 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63796dd3-f853-4adf-b368-b20086dcc602-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.105277 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.105277 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:40 crc kubenswrapper[4809]: E0312 08:01:40.105488 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:40 crc kubenswrapper[4809]: E0312 08:01:40.105550 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.105308 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:40 crc kubenswrapper[4809]: E0312 08:01:40.105624 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.203160 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/63796dd3-f853-4adf-b368-b20086dcc602-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.203311 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63796dd3-f853-4adf-b368-b20086dcc602-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.203328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/63796dd3-f853-4adf-b368-b20086dcc602-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.203362 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63796dd3-f853-4adf-b368-b20086dcc602-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.203448 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/63796dd3-f853-4adf-b368-b20086dcc602-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.203504 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63796dd3-f853-4adf-b368-b20086dcc602-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.203532 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/63796dd3-f853-4adf-b368-b20086dcc602-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.205399 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63796dd3-f853-4adf-b368-b20086dcc602-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.213764 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63796dd3-f853-4adf-b368-b20086dcc602-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc 
kubenswrapper[4809]: I0312 08:01:40.220499 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63796dd3-f853-4adf-b368-b20086dcc602-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xbfhd\" (UID: \"63796dd3-f853-4adf-b368-b20086dcc602\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.310170 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.318348 4809 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 08:01:40 crc kubenswrapper[4809]: I0312 08:01:40.352501 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" Mar 12 08:01:40 crc kubenswrapper[4809]: W0312 08:01:40.374980 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63796dd3_f853_4adf_b368_b20086dcc602.slice/crio-2e5dab5b596289d80a35c5448c753bfe6fc9dbe5aa2c56d3c501a02ffe78680d WatchSource:0}: Error finding container 2e5dab5b596289d80a35c5448c753bfe6fc9dbe5aa2c56d3c501a02ffe78680d: Status 404 returned error can't find the container with id 2e5dab5b596289d80a35c5448c753bfe6fc9dbe5aa2c56d3c501a02ffe78680d Mar 12 08:01:41 crc kubenswrapper[4809]: I0312 08:01:41.105024 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:41 crc kubenswrapper[4809]: E0312 08:01:41.105175 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:41 crc kubenswrapper[4809]: I0312 08:01:41.111709 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" event={"ID":"63796dd3-f853-4adf-b368-b20086dcc602","Type":"ContainerStarted","Data":"8f0cd1e7c7ce5f05c6681610f682a8e8d0bd40fae1fe2fcdd1a5927029546dae"} Mar 12 08:01:41 crc kubenswrapper[4809]: I0312 08:01:41.111792 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" event={"ID":"63796dd3-f853-4adf-b368-b20086dcc602","Type":"ContainerStarted","Data":"2e5dab5b596289d80a35c5448c753bfe6fc9dbe5aa2c56d3c501a02ffe78680d"} Mar 12 08:01:42 crc kubenswrapper[4809]: I0312 08:01:42.105496 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:42 crc kubenswrapper[4809]: E0312 08:01:42.105716 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:42 crc kubenswrapper[4809]: I0312 08:01:42.105515 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:42 crc kubenswrapper[4809]: E0312 08:01:42.106216 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:42 crc kubenswrapper[4809]: I0312 08:01:42.106310 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:42 crc kubenswrapper[4809]: E0312 08:01:42.106442 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:42 crc kubenswrapper[4809]: E0312 08:01:42.310496 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:43 crc kubenswrapper[4809]: I0312 08:01:43.105721 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:43 crc kubenswrapper[4809]: E0312 08:01:43.105967 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:44 crc kubenswrapper[4809]: I0312 08:01:44.106152 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:44 crc kubenswrapper[4809]: I0312 08:01:44.106221 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:44 crc kubenswrapper[4809]: I0312 08:01:44.106195 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:44 crc kubenswrapper[4809]: E0312 08:01:44.106407 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:44 crc kubenswrapper[4809]: E0312 08:01:44.106585 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:44 crc kubenswrapper[4809]: E0312 08:01:44.107377 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:45 crc kubenswrapper[4809]: I0312 08:01:45.105711 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:45 crc kubenswrapper[4809]: E0312 08:01:45.105850 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:46 crc kubenswrapper[4809]: I0312 08:01:46.105381 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:46 crc kubenswrapper[4809]: I0312 08:01:46.105480 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:46 crc kubenswrapper[4809]: I0312 08:01:46.105485 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:46 crc kubenswrapper[4809]: E0312 08:01:46.105559 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:46 crc kubenswrapper[4809]: E0312 08:01:46.105643 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:46 crc kubenswrapper[4809]: E0312 08:01:46.106333 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:46 crc kubenswrapper[4809]: I0312 08:01:46.107255 4809 scope.go:117] "RemoveContainer" containerID="b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d" Mar 12 08:01:46 crc kubenswrapper[4809]: E0312 08:01:46.107619 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7h9l6_openshift-ovn-kubernetes(cc7631d0-7d4b-4f5a-ab01-7516b2ed998e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" Mar 12 08:01:47 crc kubenswrapper[4809]: I0312 08:01:47.105750 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:47 crc kubenswrapper[4809]: E0312 08:01:47.108075 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:47 crc kubenswrapper[4809]: E0312 08:01:47.311945 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.105053 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.105188 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.105594 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:48 crc kubenswrapper[4809]: E0312 08:01:48.105821 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:48 crc kubenswrapper[4809]: E0312 08:01:48.105960 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:48 crc kubenswrapper[4809]: E0312 08:01:48.106155 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.133045 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/1.log" Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.133835 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/0.log" Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.133956 4809 generic.go:334] "Generic (PLEG): container finished" podID="85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff" containerID="a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6" exitCode=1 Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.134002 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xgl7" event={"ID":"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff","Type":"ContainerDied","Data":"a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6"} Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.134059 4809 scope.go:117] "RemoveContainer" containerID="aa0e58ad29c62a229115fb214956dc11f004bd7f4a8188d7435e0857f07ff46b" Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.134694 4809 scope.go:117] "RemoveContainer" containerID="a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6" Mar 12 08:01:48 crc kubenswrapper[4809]: E0312 08:01:48.135067 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4xgl7_openshift-multus(85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff)\"" pod="openshift-multus/multus-4xgl7" podUID="85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff" Mar 12 08:01:48 crc kubenswrapper[4809]: I0312 08:01:48.160576 4809 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xbfhd" podStartSLOduration=144.160526152 podStartE2EDuration="2m24.160526152s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:01:41.126555798 +0000 UTC m=+174.708591541" watchObservedRunningTime="2026-03-12 08:01:48.160526152 +0000 UTC m=+181.742561915" Mar 12 08:01:49 crc kubenswrapper[4809]: I0312 08:01:49.105409 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:49 crc kubenswrapper[4809]: E0312 08:01:49.105589 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:49 crc kubenswrapper[4809]: I0312 08:01:49.139675 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/1.log" Mar 12 08:01:50 crc kubenswrapper[4809]: I0312 08:01:50.105011 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:50 crc kubenswrapper[4809]: I0312 08:01:50.105077 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:50 crc kubenswrapper[4809]: E0312 08:01:50.105219 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:50 crc kubenswrapper[4809]: I0312 08:01:50.105011 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:50 crc kubenswrapper[4809]: E0312 08:01:50.105886 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:50 crc kubenswrapper[4809]: E0312 08:01:50.105994 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:51 crc kubenswrapper[4809]: I0312 08:01:51.105651 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:51 crc kubenswrapper[4809]: E0312 08:01:51.105906 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:52 crc kubenswrapper[4809]: I0312 08:01:52.105681 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:52 crc kubenswrapper[4809]: I0312 08:01:52.105771 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:52 crc kubenswrapper[4809]: I0312 08:01:52.105772 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:52 crc kubenswrapper[4809]: E0312 08:01:52.106431 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:52 crc kubenswrapper[4809]: E0312 08:01:52.106713 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:52 crc kubenswrapper[4809]: E0312 08:01:52.106803 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:52 crc kubenswrapper[4809]: E0312 08:01:52.314182 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:53 crc kubenswrapper[4809]: I0312 08:01:53.105504 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:53 crc kubenswrapper[4809]: E0312 08:01:53.105749 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:54 crc kubenswrapper[4809]: I0312 08:01:54.105497 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:54 crc kubenswrapper[4809]: I0312 08:01:54.105550 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:54 crc kubenswrapper[4809]: E0312 08:01:54.105621 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:54 crc kubenswrapper[4809]: I0312 08:01:54.105492 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:54 crc kubenswrapper[4809]: E0312 08:01:54.105802 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:54 crc kubenswrapper[4809]: E0312 08:01:54.105896 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:55 crc kubenswrapper[4809]: I0312 08:01:55.106310 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:55 crc kubenswrapper[4809]: E0312 08:01:55.106578 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:56 crc kubenswrapper[4809]: I0312 08:01:56.104982 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:56 crc kubenswrapper[4809]: I0312 08:01:56.105070 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:56 crc kubenswrapper[4809]: I0312 08:01:56.104979 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:56 crc kubenswrapper[4809]: E0312 08:01:56.105218 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:56 crc kubenswrapper[4809]: E0312 08:01:56.105404 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:56 crc kubenswrapper[4809]: E0312 08:01:56.105535 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:57 crc kubenswrapper[4809]: I0312 08:01:57.105777 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:57 crc kubenswrapper[4809]: E0312 08:01:57.107100 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:01:57 crc kubenswrapper[4809]: E0312 08:01:57.315703 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:01:58 crc kubenswrapper[4809]: I0312 08:01:58.105574 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:01:58 crc kubenswrapper[4809]: I0312 08:01:58.105574 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:01:58 crc kubenswrapper[4809]: E0312 08:01:58.105792 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:01:58 crc kubenswrapper[4809]: I0312 08:01:58.105590 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:01:58 crc kubenswrapper[4809]: E0312 08:01:58.105917 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:01:58 crc kubenswrapper[4809]: E0312 08:01:58.106084 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:01:59 crc kubenswrapper[4809]: I0312 08:01:59.105440 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:01:59 crc kubenswrapper[4809]: E0312 08:01:59.105631 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:02:00 crc kubenswrapper[4809]: I0312 08:02:00.105758 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:02:00 crc kubenswrapper[4809]: I0312 08:02:00.105811 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:00 crc kubenswrapper[4809]: I0312 08:02:00.105780 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:00 crc kubenswrapper[4809]: E0312 08:02:00.105965 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:02:00 crc kubenswrapper[4809]: E0312 08:02:00.106062 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:02:00 crc kubenswrapper[4809]: E0312 08:02:00.106297 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:02:01 crc kubenswrapper[4809]: I0312 08:02:01.105215 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:02:01 crc kubenswrapper[4809]: E0312 08:02:01.105991 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:02:01 crc kubenswrapper[4809]: I0312 08:02:01.106557 4809 scope.go:117] "RemoveContainer" containerID="b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d" Mar 12 08:02:02 crc kubenswrapper[4809]: I0312 08:02:02.035464 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p566k"] Mar 12 08:02:02 crc kubenswrapper[4809]: I0312 08:02:02.035660 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:02:02 crc kubenswrapper[4809]: E0312 08:02:02.036096 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:02:02 crc kubenswrapper[4809]: I0312 08:02:02.105387 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:02 crc kubenswrapper[4809]: I0312 08:02:02.105498 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:02 crc kubenswrapper[4809]: I0312 08:02:02.105563 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:02:02 crc kubenswrapper[4809]: E0312 08:02:02.105686 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:02:02 crc kubenswrapper[4809]: E0312 08:02:02.105915 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:02:02 crc kubenswrapper[4809]: E0312 08:02:02.105951 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:02:02 crc kubenswrapper[4809]: I0312 08:02:02.187952 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/3.log" Mar 12 08:02:02 crc kubenswrapper[4809]: I0312 08:02:02.191917 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerStarted","Data":"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9"} Mar 12 08:02:02 crc kubenswrapper[4809]: I0312 08:02:02.192651 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:02:02 crc kubenswrapper[4809]: E0312 08:02:02.317584 4809 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 08:02:03 crc kubenswrapper[4809]: I0312 08:02:03.106080 4809 scope.go:117] "RemoveContainer" containerID="a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6" Mar 12 08:02:03 crc kubenswrapper[4809]: I0312 08:02:03.125786 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podStartSLOduration=159.12575743 podStartE2EDuration="2m39.12575743s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:02.223565522 +0000 UTC m=+195.805601265" watchObservedRunningTime="2026-03-12 08:02:03.12575743 +0000 UTC m=+196.707793203" Mar 12 08:02:04 crc kubenswrapper[4809]: I0312 08:02:04.105450 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:02:04 crc kubenswrapper[4809]: I0312 08:02:04.105689 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:04 crc kubenswrapper[4809]: E0312 08:02:04.105679 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:02:04 crc kubenswrapper[4809]: I0312 08:02:04.105775 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:04 crc kubenswrapper[4809]: E0312 08:02:04.105967 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:02:04 crc kubenswrapper[4809]: I0312 08:02:04.106045 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:02:04 crc kubenswrapper[4809]: E0312 08:02:04.106253 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:02:04 crc kubenswrapper[4809]: E0312 08:02:04.106381 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:02:04 crc kubenswrapper[4809]: I0312 08:02:04.200620 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/1.log" Mar 12 08:02:04 crc kubenswrapper[4809]: I0312 08:02:04.200714 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xgl7" event={"ID":"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff","Type":"ContainerStarted","Data":"4a23740d494fbb78dad140e4ef9ec81eea5ab2a2bec25923af1731fb54b4beed"} Mar 12 08:02:06 crc kubenswrapper[4809]: I0312 08:02:06.105387 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:06 crc kubenswrapper[4809]: I0312 08:02:06.105498 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:02:06 crc kubenswrapper[4809]: I0312 08:02:06.105533 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:02:06 crc kubenswrapper[4809]: I0312 08:02:06.105613 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:06 crc kubenswrapper[4809]: E0312 08:02:06.105844 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 08:02:06 crc kubenswrapper[4809]: E0312 08:02:06.106099 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 08:02:06 crc kubenswrapper[4809]: E0312 08:02:06.106288 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p566k" podUID="3d31c58d-0f0d-431f-bebc-57173f467eee" Mar 12 08:02:06 crc kubenswrapper[4809]: E0312 08:02:06.106389 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.105619 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.105679 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.105736 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.105796 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.108051 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.108270 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.110293 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.111431 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.111914 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 08:02:08 crc kubenswrapper[4809]: I0312 08:02:08.111966 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.922724 4809 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.977991 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5hmb"] Mar 12 08:02:10 crc 
kubenswrapper[4809]: I0312 08:02:10.978711 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.979534 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d525c"] Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.980606 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.982547 4809 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.982618 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983195 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:10 crc 
kubenswrapper[4809]: I0312 08:02:10.983254 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vl6p\" (UniqueName: \"kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983314 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983392 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-client-ca\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983437 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-serving-cert\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983474 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-serving-cert\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983510 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7df9\" (UniqueName: \"kubernetes.io/projected/481587db-884d-425d-ba58-4921449275ef-kube-api-access-w7df9\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983546 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-encryption-config\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983602 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-audit\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983635 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-image-import-ca\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983679 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-config\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983714 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-etcd-client\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983843 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983879 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-etcd-serving-ca\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983912 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/481587db-884d-425d-ba58-4921449275ef-audit-dir\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.983946 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/481587db-884d-425d-ba58-4921449275ef-node-pullsecrets\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.984743 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cqnvn"]
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.985781 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.987330 4809 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.987363 4809 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.987381 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.987422 4809 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.987442 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.987483 4809 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.987481 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.987507 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.987549 4809 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.987577 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.989597 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.989777 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df"]
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.990221 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.990918 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d"]
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.991346 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.991524 4809 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.991559 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.991628 4809 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.991646 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.991694 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.991749 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.991907 4809 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.991928 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.991974 4809 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.991988 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.992024 4809 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.992038 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.992305 4809 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.992357 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.992801 4809 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.992857 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.992974 4809 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.993006 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.993236 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.993466 4809 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.993524 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.995273 4809 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: W0312 08:02:10.995287 4809 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.995322 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: E0312 08:02:10.995347 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.995352 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.995767 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dppmm"]
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.996476 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.996587 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.997437 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fl8dg"]
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.998549 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fl8dg"
Mar 12 08:02:10 crc kubenswrapper[4809]: I0312 08:02:10.998927 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.003211 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.003857 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s8wdz"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.009324 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8"
Mar 12 08:02:11 crc kubenswrapper[4809]: W0312 08:02:11.009593 4809 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Mar 12 08:02:11 crc kubenswrapper[4809]: E0312 08:02:11.009633 4809 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.009741 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.018782 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.019508 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.019866 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.019513 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.020635 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.021384 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.022006 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.022849 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.023255 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.024271 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.031127 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.034407 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.044896 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.045717 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.045891 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.046042 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.046228 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.046366 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.046824 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.047236 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fvtvw"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.047383 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.047597 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6vssd"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.047871 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.048381 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.048756 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.048883 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.049132 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.049447 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.050345 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.050369 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6vssd"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.055596 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.055751 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.057603 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.058891 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pghrh"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.059377 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.060000 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rvfv4"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.060738 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rvfv4"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.061330 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.061555 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5d8gm"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.061903 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.062063 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.074982 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.075476 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.075701 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.075890 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.075933 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076062 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076106 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076208 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076241 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076349 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076558 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076735 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076795 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076839 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076898 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076918 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076937 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077002 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077049 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077104 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077171 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077240 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077300 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077328 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.076807 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s"]
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077437 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077445 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077526 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077593 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077645 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077732
4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077743 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077851 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077872 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.077985 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078175 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078214 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078275 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078351 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078432 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078505 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 
08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078590 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078731 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078828 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078923 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.078998 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.079003 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.079031 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.079080 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.079319 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.079353 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.079804 4809 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ttmrw"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.084874 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.084935 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-etcd-serving-ca\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.084962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/481587db-884d-425d-ba58-4921449275ef-audit-dir\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.084990 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/481587db-884d-425d-ba58-4921449275ef-node-pullsecrets\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085022 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-trusted-ca-bundle\") pod 
\"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0483712f-354b-4628-a732-73dc9740e700-machine-approver-tls\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085106 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085177 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085208 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vl6p\" (UniqueName: \"kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085245 4809 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085272 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrbqp\" (UniqueName: \"kubernetes.io/projected/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-kube-api-access-qrbqp\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085293 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/481587db-884d-425d-ba58-4921449275ef-node-pullsecrets\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.085301 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-client-ca\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.086290 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-client-ca\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:11 crc 
kubenswrapper[4809]: I0312 08:02:11.086579 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/481587db-884d-425d-ba58-4921449275ef-audit-dir\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.086635 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-serving-cert\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.086703 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr42q\" (UniqueName: \"kubernetes.io/projected/63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148-kube-api-access-mr42q\") pod \"dns-operator-744455d44c-fvtvw\" (UID: \"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148\") " pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.086810 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-serving-cert\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.086858 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-service-ca\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 
08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.087717 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-encryption-config\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.087824 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7df9\" (UniqueName: \"kubernetes.io/projected/481587db-884d-425d-ba58-4921449275ef-kube-api-access-w7df9\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.087887 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-oauth-serving-cert\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.087912 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8h92\" (UniqueName: \"kubernetes.io/projected/6bd51111-825a-4679-95fa-6dfe33ff138c-kube-api-access-d8h92\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.087933 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0483712f-354b-4628-a732-73dc9740e700-auth-proxy-config\") pod \"machine-approver-56656f9798-ljb8d\" (UID: 
\"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.087967 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-audit\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088032 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-image-import-ca\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088072 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-oauth-config\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088202 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxcf\" (UniqueName: \"kubernetes.io/projected/0483712f-354b-4628-a732-73dc9740e700-kube-api-access-mmxcf\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088249 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-config\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088277 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-serving-cert\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088310 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-etcd-client\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088327 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555042-287fz"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088341 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088368 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148-metrics-tls\") pod \"dns-operator-744455d44c-fvtvw\" (UID: \"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088400 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0483712f-354b-4628-a732-73dc9740e700-config\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088462 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-console-config\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088508 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.088824 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bqvjf"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.089398 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.089477 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.117267 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-serving-cert\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.089417 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555042-287fz" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.122181 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrnnz"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.123448 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.124783 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.089416 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.157270 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.160314 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.169462 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.170167 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.172468 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.172720 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.174069 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.178291 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.180240 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.186479 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.186487 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.186649 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 12 08:02:11 crc 
kubenswrapper[4809]: I0312 08:02:11.188086 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189554 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8h92\" (UniqueName: \"kubernetes.io/projected/6bd51111-825a-4679-95fa-6dfe33ff138c-kube-api-access-d8h92\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189601 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0483712f-354b-4628-a732-73dc9740e700-auth-proxy-config\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189627 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-oauth-serving-cert\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189669 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-oauth-config\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189702 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxcf\" (UniqueName: 
\"kubernetes.io/projected/0483712f-354b-4628-a732-73dc9740e700-kube-api-access-mmxcf\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189731 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-serving-cert\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189756 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189781 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148-metrics-tls\") pod \"dns-operator-744455d44c-fvtvw\" (UID: \"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148\") " pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189805 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0483712f-354b-4628-a732-73dc9740e700-config\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189829 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-console-config\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189851 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189887 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-trusted-ca-bundle\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0483712f-354b-4628-a732-73dc9740e700-machine-approver-tls\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189936 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189976 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrbqp\" (UniqueName: \"kubernetes.io/projected/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-kube-api-access-qrbqp\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.189998 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr42q\" (UniqueName: \"kubernetes.io/projected/63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148-kube-api-access-mr42q\") pod \"dns-operator-744455d44c-fvtvw\" (UID: \"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148\") " pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.190013 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-service-ca\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.191273 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0483712f-354b-4628-a732-73dc9740e700-auth-proxy-config\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.191863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0483712f-354b-4628-a732-73dc9740e700-config\") pod 
\"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.192701 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-oauth-serving-cert\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.194339 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-console-config\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.195873 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.202680 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0483712f-354b-4628-a732-73dc9740e700-machine-approver-tls\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.202803 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-serving-cert\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.203094 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.203355 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.207819 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.211874 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-oauth-config\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.213307 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.215667 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-trusted-ca-bundle\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" 
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.215819 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.216273 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cqnvn"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.216298 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.216599 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pxnfg"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.216903 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.217253 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.217608 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.218154 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.218616 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148-metrics-tls\") pod \"dns-operator-744455d44c-fvtvw\" (UID: \"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148\") " pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 
08:02:11.218757 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.219830 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.224783 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.224849 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.229501 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.229602 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.229643 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.229664 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.229698 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.230035 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.231188 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r24lv"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.231351 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.236013 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.237409 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.243142 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.245778 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.246572 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.247805 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.253939 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.262511 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.262825 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.262591 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dppmm"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.263281 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.263329 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.263349 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.263393 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d525c"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.263416 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-n7bs7"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.267832 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fvtvw"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.267872 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs"] 
Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.267885 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.267988 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.269990 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bqvjf"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.271045 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrnnz"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.271837 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-service-ca\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.273212 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.273320 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5d8gm"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.277238 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pxnfg"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.278780 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.280125 4809 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-console-operator/console-operator-58897d9998-6vssd"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.281037 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.289160 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.292419 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pghrh"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.293784 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fl8dg"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.295313 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s8wdz"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.296414 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.298991 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.301333 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.304627 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555042-287fz"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.306009 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.310361 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.311507 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rvfv4"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.313055 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.314337 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.315585 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.317728 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.319076 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.321255 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.321438 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r24lv"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.325749 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.327121 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.328483 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.330370 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-76fnn"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.331439 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-76fnn" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.331671 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lt58n"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.332829 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.333142 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5hmb"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.334256 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lt58n"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.335371 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-76fnn"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.336984 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njprg"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.338601 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njprg"] Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.338727 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.340887 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.360577 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.381005 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.400648 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.420553 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.440834 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.461221 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.480601 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.501345 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.524016 4809 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.562913 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.621083 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.641265 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.661583 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.680920 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.701630 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.721634 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.741868 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.761789 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.781847 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.803239 4809 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.822876 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.849968 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.860679 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.881149 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.902004 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.920958 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.941903 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.961986 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 08:02:11 crc kubenswrapper[4809]: I0312 08:02:11.982018 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.001341 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 08:02:12 crc kubenswrapper[4809]: 
I0312 08:02:12.021954 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.042459 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.060985 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.081692 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.087344 4809 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.087418 4809 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.087447 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-serving-cert podName:481587db-884d-425d-ba58-4921449275ef nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.587414107 +0000 UTC m=+206.169449870 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-serving-cert") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync secret cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.087837 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles podName:98cc5813-f6d5-4e2a-a7d4-2546e18e60f9 nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.587817 +0000 UTC m=+206.169852763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles") pod "controller-manager-879f6c89f-r5hmb" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.087451 4809 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088166 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-trusted-ca-bundle podName:481587db-884d-425d-ba58-4921449275ef nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.588068047 +0000 UTC m=+206.170103820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-trusted-ca-bundle") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.087492 4809 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088255 4809 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.087520 4809 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088322 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-audit podName:481587db-884d-425d-ba58-4921449275ef nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.588299663 +0000 UTC m=+206.170335436 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-audit") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.087923 4809 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088349 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config podName:98cc5813-f6d5-4e2a-a7d4-2546e18e60f9 nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.588336774 +0000 UTC m=+206.170372537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config") pod "controller-manager-879f6c89f-r5hmb" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088412 4809 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088464 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-etcd-serving-ca podName:481587db-884d-425d-ba58-4921449275ef nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.588411977 +0000 UTC m=+206.170447730 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-etcd-serving-ca") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088480 4809 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088487 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-encryption-config podName:481587db-884d-425d-ba58-4921449275ef nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.588475778 +0000 UTC m=+206.170511521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-encryption-config") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync secret cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088589 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-config podName:481587db-884d-425d-ba58-4921449275ef nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.588565571 +0000 UTC m=+206.170601414 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-config") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.088617 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-etcd-client podName:481587db-884d-425d-ba58-4921449275ef nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.588602582 +0000 UTC m=+206.170638455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-etcd-client") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync secret cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.100333 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.100481 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.103015 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.107723 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.122691 4809 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.122807 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-image-import-ca podName:481587db-884d-425d-ba58-4921449275ef nodeName:}" failed. No retries permitted until 2026-03-12 08:02:12.622780072 +0000 UTC m=+206.204815845 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-image-import-ca") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.134725 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8h92\" (UniqueName: \"kubernetes.io/projected/6bd51111-825a-4679-95fa-6dfe33ff138c-kube-api-access-d8h92\") pod \"console-f9d7485db-rvfv4\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.140055 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr42q\" (UniqueName: \"kubernetes.io/projected/63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148-kube-api-access-mr42q\") pod \"dns-operator-744455d44c-fvtvw\" (UID: \"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148\") " pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.162473 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.182084 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxcf\" (UniqueName: \"kubernetes.io/projected/0483712f-354b-4628-a732-73dc9740e700-kube-api-access-mmxcf\") pod \"machine-approver-56656f9798-ljb8d\" (UID: \"0483712f-354b-4628-a732-73dc9740e700\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:12 crc 
kubenswrapper[4809]: I0312 08:02:12.202510 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.203395 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.204021 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.205431 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.205789 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:04:14.205755524 +0000 UTC m=+327.787791267 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.210001 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.211404 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.218615 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrbqp\" (UniqueName: \"kubernetes.io/projected/a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f-kube-api-access-qrbqp\") pod \"cluster-image-registry-operator-dc59b4c8b-g4vh8\" (UID: \"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.234302 4809 request.go:700] Waited for 1.012914884s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.234513 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.238018 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.240643 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.241106 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.262374 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.281760 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.301975 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.322091 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.341041 4809 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.351269 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.362471 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.366461 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.379835 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.381296 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.401392 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.421450 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.434863 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.441709 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.450582 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fvtvw"] Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.462278 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: W0312 08:02:12.467308 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0483712f_354b_4628_a732_73dc9740e700.slice/crio-6cb18e93258e75032a9fdb7a559913ed73f1564d989cf5b9f4f03e0bce1bcbbb WatchSource:0}: Error finding container 6cb18e93258e75032a9fdb7a559913ed73f1564d989cf5b9f4f03e0bce1bcbbb: Status 404 returned error can't find the container with id 6cb18e93258e75032a9fdb7a559913ed73f1564d989cf5b9f4f03e0bce1bcbbb Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.478969 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rvfv4"] Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.479108 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.483201 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.502858 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 08:02:12 crc kubenswrapper[4809]: W0312 08:02:12.511198 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bd51111_825a_4679_95fa_6dfe33ff138c.slice/crio-f862b73f694a27ee8ed01f5a755f71528097afd1a3b257606a763320d603c909 WatchSource:0}: Error finding container f862b73f694a27ee8ed01f5a755f71528097afd1a3b257606a763320d603c909: Status 404 returned error can't find the container with id f862b73f694a27ee8ed01f5a755f71528097afd1a3b257606a763320d603c909 Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.520744 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.541834 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.561023 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 08:02:12 crc kubenswrapper[4809]: W0312 08:02:12.564167 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3fab297ef168e3031292356deb4ae8fedea4a6f1da0b31435efb90451a6c426b WatchSource:0}: Error finding container 
3fab297ef168e3031292356deb4ae8fedea4a6f1da0b31435efb90451a6c426b: Status 404 returned error can't find the container with id 3fab297ef168e3031292356deb4ae8fedea4a6f1da0b31435efb90451a6c426b Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.580938 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.593485 4809 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.593510 4809 projected.go:194] Error preparing data for projected volume kube-api-access-7vl6p for pod openshift-controller-manager/controller-manager-879f6c89f-r5hmb: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.593562 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p podName:98cc5813-f6d5-4e2a-a7d4-2546e18e60f9 nodeName:}" failed. No retries permitted until 2026-03-12 08:02:13.09354462 +0000 UTC m=+206.675580353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7vl6p" (UniqueName: "kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p") pod "controller-manager-879f6c89f-r5hmb" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.606624 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614264 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614307 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-etcd-serving-ca\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614345 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614376 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-serving-cert\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614536 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-encryption-config\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614576 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-audit\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614610 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-config\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.614697 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-etcd-client\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:12 crc kubenswrapper[4809]: E0312 08:02:12.616049 4809 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.620782 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: W0312 08:02:12.621668 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-15503024e3b46c43a3f4c6f9c25af836b144c1d486a9d23b9432300f96d2f620 WatchSource:0}: Error finding container 15503024e3b46c43a3f4c6f9c25af836b144c1d486a9d23b9432300f96d2f620: Status 404 returned error can't find the container with id 15503024e3b46c43a3f4c6f9c25af836b144c1d486a9d23b9432300f96d2f620 Mar 12 08:02:12 crc kubenswrapper[4809]: W0312 08:02:12.623336 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-2c7a5f3c7a9f73e762546b4f2b03fb11b800c87b20af4eaf32c912b5bb5fac18 WatchSource:0}: Error finding container 2c7a5f3c7a9f73e762546b4f2b03fb11b800c87b20af4eaf32c912b5bb5fac18: Status 404 returned error can't find the container with id 2c7a5f3c7a9f73e762546b4f2b03fb11b800c87b20af4eaf32c912b5bb5fac18 Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.640711 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.661480 4809 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.683567 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8"] Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.683590 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 08:02:12 crc kubenswrapper[4809]: W0312 08:02:12.690400 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda71a8c5d_bc8d_4f9d_9e8d_86a8075cc76f.slice/crio-3861acda165e9d23f9dfa52a9825679c09c88f66add7a2a5d4a7de19326968db WatchSource:0}: Error finding container 3861acda165e9d23f9dfa52a9825679c09c88f66add7a2a5d4a7de19326968db: Status 404 returned error can't find the container with id 3861acda165e9d23f9dfa52a9825679c09c88f66add7a2a5d4a7de19326968db Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.701598 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.715587 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-image-import-ca\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.723205 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.741526 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 
08:02:12.761755 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.782432 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.800728 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.821701 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.841219 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.860777 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.881525 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.901974 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.921260 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.942549 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.962052 4809 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 08:02:12 crc kubenswrapper[4809]: I0312 08:02:12.981797 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.001448 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.021534 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.062990 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.082089 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.103599 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.137244 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vl6p\" (UniqueName: \"kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.140219 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.141594 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 
12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.161569 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.182274 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.201915 4809 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.221260 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.239262 4809 request.go:700] Waited for 1.899796509s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.243274 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.255191 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c8d55eee30c201adcde1abf9cd9e94ea3011ea8c849959d80c7731b8f8dcf24b"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.255255 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"15503024e3b46c43a3f4c6f9c25af836b144c1d486a9d23b9432300f96d2f620"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.255426 4809 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.257431 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"32f06f9271375409a77b5b341f2395d1be8f26f0708b95f0ee834832b2319f3f"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.257512 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2c7a5f3c7a9f73e762546b4f2b03fb11b800c87b20af4eaf32c912b5bb5fac18"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.259801 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rvfv4" event={"ID":"6bd51111-825a-4679-95fa-6dfe33ff138c","Type":"ContainerStarted","Data":"25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.259838 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rvfv4" event={"ID":"6bd51111-825a-4679-95fa-6dfe33ff138c","Type":"ContainerStarted","Data":"f862b73f694a27ee8ed01f5a755f71528097afd1a3b257606a763320d603c909"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.261812 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"20708d94813d091288bdb2267041b79bbdbddd4a76b4d96190370bae111d707a"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.261855 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3fab297ef168e3031292356deb4ae8fedea4a6f1da0b31435efb90451a6c426b"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.264567 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" event={"ID":"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f","Type":"ContainerStarted","Data":"575e8b50ee98e022e9c25e96fbca91cb68134c31ec8031b9f31d2175780bd84c"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.264627 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" event={"ID":"a71a8c5d-bc8d-4f9d-9e8d-86a8075cc76f","Type":"ContainerStarted","Data":"3861acda165e9d23f9dfa52a9825679c09c88f66add7a2a5d4a7de19326968db"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.267381 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" event={"ID":"0483712f-354b-4628-a732-73dc9740e700","Type":"ContainerStarted","Data":"d7f209ad2a47e19d6e91262ac1cfca447b7a5c24c1f460eeaab625f880408fd7"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.267415 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" event={"ID":"0483712f-354b-4628-a732-73dc9740e700","Type":"ContainerStarted","Data":"b0809d78c7815baad884633dfa9382567056186038da953721abf901a6866982"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.267433 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" event={"ID":"0483712f-354b-4628-a732-73dc9740e700","Type":"ContainerStarted","Data":"6cb18e93258e75032a9fdb7a559913ed73f1564d989cf5b9f4f03e0bce1bcbbb"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.271567 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" event={"ID":"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148","Type":"ContainerStarted","Data":"d0b0606edfa76e83160bd934e9673ddde1c7cc30d3615694f5125bd981a360a4"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.271612 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" event={"ID":"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148","Type":"ContainerStarted","Data":"3880e868f57db71c05cf754424d454f7be2a12926ec6287f2ba4d174f5bef476"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.271627 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" event={"ID":"63bf4c2c-7ce7-49b6-b0f2-5fec14dc0148","Type":"ContainerStarted","Data":"9d586210d7e83c8b230637e108c599f7f64f8a7e750889398a439934adb2434f"} Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.282420 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.302212 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.307290 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.313439 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-encryption-config\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.322248 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.340659 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-bound-sa-token\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.340710 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-stats-auth\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.340746 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1c6c047-6fde-4d86-a82c-d8d259265412-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.340817 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb31fcb-c363-4434-a971-126881422750-serving-cert\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: 
I0312 08:02:13.340904 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-config\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.340968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b24173e0-5140-4c4d-ab8b-ea3696db0b74-service-ca-bundle\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341006 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f54e91-6901-4948-937c-cb0b85f43196-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341039 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4pn\" (UniqueName: \"kubernetes.io/projected/e70704a8-4517-4ee2-8a1f-de93c014b0da-kube-api-access-4n4pn\") pod \"cluster-samples-operator-665b6dd947-kmr2s\" (UID: \"e70704a8-4517-4ee2-8a1f-de93c014b0da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341079 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341134 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msw9j\" (UniqueName: \"kubernetes.io/projected/3e14730e-9ab1-4dd1-b786-142b82b59802-kube-api-access-msw9j\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341175 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkwp\" (UniqueName: \"kubernetes.io/projected/63625da7-0f0d-48f1-8b58-c75e04bc31e4-kube-api-access-6hkwp\") pod \"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341209 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341283 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341329 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd7cq\" (UniqueName: \"kubernetes.io/projected/60f54e91-6901-4948-937c-cb0b85f43196-kube-api-access-dd7cq\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341364 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341400 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-serving-cert\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341436 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341468 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vqkdw\" (UniqueName: \"kubernetes.io/projected/afb31fcb-c363-4434-a971-126881422750-kube-api-access-vqkdw\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341501 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341542 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-tls\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341581 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-dir\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341648 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e70704a8-4517-4ee2-8a1f-de93c014b0da-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kmr2s\" (UID: 
\"e70704a8-4517-4ee2-8a1f-de93c014b0da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341689 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-ca\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341724 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b05f76-b086-4375-9ba4-b1d4f5624ba0-config\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.341800 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:13.841776161 +0000 UTC m=+207.423811904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341839 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-etcd-client\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341877 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341937 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjtnl\" (UniqueName: \"kubernetes.io/projected/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-kube-api-access-tjtnl\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.341982 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjnj\" (UniqueName: 
\"kubernetes.io/projected/1189f657-b031-4ece-859b-95d3eadd8221-kube-api-access-tqjnj\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342024 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-audit-policies\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342172 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-serving-cert\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342324 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmvd\" (UniqueName: \"kubernetes.io/projected/d13f5606-abd9-46c4-b1db-1f01a3275ba8-kube-api-access-mmmvd\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342355 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/63625da7-0f0d-48f1-8b58-c75e04bc31e4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342378 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342400 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1c6c047-6fde-4d86-a82c-d8d259265412-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342427 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d13f5606-abd9-46c4-b1db-1f01a3275ba8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342446 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-service-ca\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342479 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-certificates\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342498 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342520 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342564 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63625da7-0f0d-48f1-8b58-c75e04bc31e4-serving-cert\") 
pod \"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342601 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60f54e91-6901-4948-937c-cb0b85f43196-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342622 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx998\" (UniqueName: \"kubernetes.io/projected/76b05f76-b086-4375-9ba4-b1d4f5624ba0-kube-api-access-tx998\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342643 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-default-certificate\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342711 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-metrics-certs\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: 
I0312 08:02:13.342728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5k5q\" (UniqueName: \"kubernetes.io/projected/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-kube-api-access-m5k5q\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342747 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrkgv\" (UniqueName: \"kubernetes.io/projected/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-kube-api-access-vrkgv\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342771 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-serving-cert\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342790 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-policies\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342807 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76b05f76-b086-4375-9ba4-b1d4f5624ba0-images\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: 
\"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342851 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76b05f76-b086-4375-9ba4-b1d4f5624ba0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342871 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342888 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342913 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342945 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-client\") pod \"etcd-operator-b45778765-pghrh\" (UID: 
\"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.342964 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1189f657-b031-4ece-859b-95d3eadd8221-serving-cert\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343004 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343035 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-config\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343054 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-trusted-ca\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343071 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13f5606-abd9-46c4-b1db-1f01a3275ba8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343091 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cppn\" (UniqueName: \"kubernetes.io/projected/a752ff7b-9553-492d-83d0-42bb9ea5dfa9-kube-api-access-7cppn\") pod \"downloads-7954f5f757-fl8dg\" (UID: \"a752ff7b-9553-492d-83d0-42bb9ea5dfa9\") " pod="openshift-console/downloads-7954f5f757-fl8dg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343441 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-trusted-ca\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343470 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343563 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8fz\" (UniqueName: \"kubernetes.io/projected/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-kube-api-access-2s8fz\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: 
\"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343619 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-config\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343654 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-service-ca-bundle\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343753 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8zb7\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-kube-api-access-x8zb7\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343808 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-trusted-ca\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.343949 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-metrics-tls\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.344026 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-audit-dir\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.344098 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcbpn\" (UniqueName: \"kubernetes.io/projected/b24173e0-5140-4c4d-ab8b-ea3696db0b74-kube-api-access-vcbpn\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.344169 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.344231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-encryption-config\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.344342 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-config\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.344390 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-client-ca\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.348948 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-config\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.361081 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.366558 4809 projected.go:194] Error preparing data for projected volume kube-api-access-w7df9 for pod openshift-apiserver/apiserver-76f77b778f-d525c: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.367481 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/481587db-884d-425d-ba58-4921449275ef-kube-api-access-w7df9 podName:481587db-884d-425d-ba58-4921449275ef 
nodeName:}" failed. No retries permitted until 2026-03-12 08:02:13.867430494 +0000 UTC m=+207.449466227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w7df9" (UniqueName: "kubernetes.io/projected/481587db-884d-425d-ba58-4921449275ef-kube-api-access-w7df9") pod "apiserver-76f77b778f-d525c" (UID: "481587db-884d-425d-ba58-4921449275ef") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.382051 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.395307 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vl6p\" (UniqueName: \"kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.401206 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.407543 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-image-import-ca\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.422875 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.430313 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-etcd-client\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.441857 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.445284 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.445555 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:13.945514704 +0000 UTC m=+207.527550467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.445638 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-certificates\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.445687 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.445739 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/03f48903-53e7-4a18-a2ee-2a94eefc9182-certs\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.445778 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63625da7-0f0d-48f1-8b58-c75e04bc31e4-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.445818 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.445991 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-secret-volume\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446072 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a6838f8-a7aa-46c5-9e16-6885daff4a88-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: \"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446148 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60f54e91-6901-4948-937c-cb0b85f43196-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc 
kubenswrapper[4809]: I0312 08:02:13.446179 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx998\" (UniqueName: \"kubernetes.io/projected/76b05f76-b086-4375-9ba4-b1d4f5624ba0-kube-api-access-tx998\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446207 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-config-volume\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446242 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tdth\" (UniqueName: \"kubernetes.io/projected/d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a-kube-api-access-2tdth\") pod \"multus-admission-controller-857f4d67dd-bqvjf\" (UID: \"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446276 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrkgv\" (UniqueName: \"kubernetes.io/projected/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-kube-api-access-vrkgv\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446327 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76b05f76-b086-4375-9ba4-b1d4f5624ba0-images\") pod 
\"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446369 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/54d01473-f99a-47d2-ae35-0a4b933b5098-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfbkw\" (UID: \"54d01473-f99a-47d2-ae35-0a4b933b5098\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446417 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446447 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac7cc05e-989a-4474-9685-9600e3502dfd-webhook-cert\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446486 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-profile-collector-cert\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446513 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/03f48903-53e7-4a18-a2ee-2a94eefc9182-node-bootstrap-token\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446542 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/98a6e235-45c4-4192-8060-6771a197f829-images\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446594 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-trusted-ca\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446633 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cppn\" (UniqueName: \"kubernetes.io/projected/a752ff7b-9553-492d-83d0-42bb9ea5dfa9-kube-api-access-7cppn\") pod \"downloads-7954f5f757-fl8dg\" (UID: \"a752ff7b-9553-492d-83d0-42bb9ea5dfa9\") " pod="openshift-console/downloads-7954f5f757-fl8dg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446695 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-trusted-ca\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446737 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgxjt\" (UniqueName: \"kubernetes.io/projected/879e1dfb-bff9-4ff0-99e3-912124941b77-kube-api-access-pgxjt\") pod \"migrator-59844c95c7-8m4nh\" (UID: \"879e1dfb-bff9-4ff0-99e3-912124941b77\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446789 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8fz\" (UniqueName: \"kubernetes.io/projected/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-kube-api-access-2s8fz\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446821 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31ccf9d3-41db-457e-9341-744f7945b4f5-serving-cert\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.446872 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-metrics-tls\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: 
I0312 08:02:13.446920 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-audit-dir\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.447127 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.447660 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.447995 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-certificates\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.448264 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-trusted-ca\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.448525 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.448609 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-encryption-config\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.448684 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-srv-cert\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.448753 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-socket-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.448831 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8f4l\" (UniqueName: 
\"kubernetes.io/projected/ac7cc05e-989a-4474-9685-9600e3502dfd-kube-api-access-l8f4l\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.449600 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-audit-dir\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.450200 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx75m\" (UniqueName: \"kubernetes.io/projected/88d136d4-995f-44d0-8691-e84bfacb68c3-kube-api-access-rx75m\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.450369 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-stats-auth\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.450401 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.450872 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kt52\" (UniqueName: \"kubernetes.io/projected/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-kube-api-access-4kt52\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.450876 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-trusted-ca\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.450937 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-config-volume\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451247 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1c6c047-6fde-4d86-a82c-d8d259265412-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451347 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac7cc05e-989a-4474-9685-9600e3502dfd-tmpfs\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451541 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b24173e0-5140-4c4d-ab8b-ea3696db0b74-service-ca-bundle\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451627 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451679 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451762 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bqvjf\" (UID: \"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451853 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.451943 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-registration-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452037 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd7cq\" (UniqueName: \"kubernetes.io/projected/60f54e91-6901-4948-937c-cb0b85f43196-kube-api-access-dd7cq\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452194 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1c6c047-6fde-4d86-a82c-d8d259265412-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452205 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-serving-cert\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452317 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tht\" (UniqueName: \"kubernetes.io/projected/38092207-e107-4e5a-8706-b3ad66bea661-kube-api-access-77tht\") pod \"package-server-manager-789f6589d5-w2hxb\" (UID: \"38092207-e107-4e5a-8706-b3ad66bea661\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452402 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452423 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b24173e0-5140-4c4d-ab8b-ea3696db0b74-service-ca-bundle\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452480 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e70704a8-4517-4ee2-8a1f-de93c014b0da-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kmr2s\" (UID: \"e70704a8-4517-4ee2-8a1f-de93c014b0da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452540 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2726995b-6d26-48c3-9e7d-da323657f55c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452575 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztw8s\" (UniqueName: \"kubernetes.io/projected/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-kube-api-access-ztw8s\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452616 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-etcd-client\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452709 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452826 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2726995b-6d26-48c3-9e7d-da323657f55c-config\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.452917 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqjnj\" (UniqueName: \"kubernetes.io/projected/1189f657-b031-4ece-859b-95d3eadd8221-kube-api-access-tqjnj\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453003 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-audit-policies\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453089 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-serving-cert\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453101 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-encryption-config\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453280 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31ccf9d3-41db-457e-9341-744f7945b4f5-config\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453417 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmmvd\" (UniqueName: \"kubernetes.io/projected/d13f5606-abd9-46c4-b1db-1f01a3275ba8-kube-api-access-mmmvd\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453479 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453594 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/63625da7-0f0d-48f1-8b58-c75e04bc31e4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453740 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-service-ca\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.453882 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-metrics-tls\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.454152 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63625da7-0f0d-48f1-8b58-c75e04bc31e4-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.454765 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/63625da7-0f0d-48f1-8b58-c75e04bc31e4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.458543 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-serving-cert\") pod \"apiserver-7bbb656c7d-fvmsc\" 
(UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.458610 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.459147 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98a6e235-45c4-4192-8060-6771a197f829-proxy-tls\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.459217 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d4d6ee6-9302-4b5e-b917-3a52513210ed-proxy-tls\") pod \"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.459281 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-default-certificate\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.459768 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.461682 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-service-ca\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.459168 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-audit-policies\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.461805 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.462177 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-metrics-certs\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463011 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: 
\"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463091 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463305 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5k5q\" (UniqueName: \"kubernetes.io/projected/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-kube-api-access-m5k5q\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463337 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-metrics-tls\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463621 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e70704a8-4517-4ee2-8a1f-de93c014b0da-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kmr2s\" (UID: \"e70704a8-4517-4ee2-8a1f-de93c014b0da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463361 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-752tp\" (UniqueName: 
\"kubernetes.io/projected/512b5035-d9af-4615-b351-2199e94f9c50-kube-api-access-752tp\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463761 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fv62\" (UniqueName: \"kubernetes.io/projected/0621879e-29ff-49a3-81fc-1bde4a2d22ae-kube-api-access-5fv62\") pod \"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463823 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-serving-cert\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463871 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-policies\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.463916 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76b05f76-b086-4375-9ba4-b1d4f5624ba0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 
08:02:13.463961 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchwv\" (UniqueName: \"kubernetes.io/projected/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-kube-api-access-zchwv\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464001 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gq2\" (UniqueName: \"kubernetes.io/projected/31ccf9d3-41db-457e-9341-744f7945b4f5-kube-api-access-89gq2\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464056 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464097 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0621879e-29ff-49a3-81fc-1bde4a2d22ae-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464234 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-client\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464311 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1189f657-b031-4ece-859b-95d3eadd8221-serving-cert\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464338 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-serving-cert\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464437 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464529 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-config\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464613 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464656 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/38092207-e107-4e5a-8706-b3ad66bea661-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2hxb\" (UID: \"38092207-e107-4e5a-8706-b3ad66bea661\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464741 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13f5606-abd9-46c4-b1db-1f01a3275ba8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464825 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.464916 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0a6838f8-a7aa-46c5-9e16-6885daff4a88-config\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: \"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465009 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-config\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465096 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-service-ca-bundle\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465210 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8zb7\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-kube-api-access-x8zb7\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465295 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-trusted-ca\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 
08:02:13.465379 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac7cc05e-989a-4474-9685-9600e3502dfd-apiservice-cert\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465506 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcbpn\" (UniqueName: \"kubernetes.io/projected/b24173e0-5140-4c4d-ab8b-ea3696db0b74-kube-api-access-vcbpn\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465625 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465708 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5l6f\" (UniqueName: \"kubernetes.io/projected/03f48903-53e7-4a18-a2ee-2a94eefc9182-kube-api-access-m5l6f\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465784 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6838f8-a7aa-46c5-9e16-6885daff4a88-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: 
\"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465990 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60f54e91-6901-4948-937c-cb0b85f43196-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.465234 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-policies\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.466657 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.467200 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.467609 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-service-ca-bundle\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.469337 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13f5606-abd9-46c4-b1db-1f01a3275ba8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.469412 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-client-ca\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.469457 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-config\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.469523 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnhs\" (UniqueName: \"kubernetes.io/projected/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-kube-api-access-svnhs\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.471935 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.470856 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1189f657-b031-4ece-859b-95d3eadd8221-serving-cert\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.471630 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.471676 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-stats-auth\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472040 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-config\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.471903 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.471946 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-bound-sa-token\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.471728 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-config\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.469707 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-client-ca\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472255 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-default-certificate\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472287 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-config\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472330 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-csi-data-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472478 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7tv\" (UniqueName: \"kubernetes.io/projected/98a6e235-45c4-4192-8060-6771a197f829-kube-api-access-6g7tv\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472613 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7sk\" (UniqueName: \"kubernetes.io/projected/860674d0-3197-44c8-8912-15ec9f06c643-kube-api-access-xk7sk\") pod \"ingress-canary-76fnn\" (UID: \"860674d0-3197-44c8-8912-15ec9f06c643\") " pod="openshift-ingress-canary/ingress-canary-76fnn" Mar 12 08:02:13 crc 
kubenswrapper[4809]: I0312 08:02:13.472695 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb31fcb-c363-4434-a971-126881422750-serving-cert\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472738 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-config\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472780 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f54e91-6901-4948-937c-cb0b85f43196-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472809 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4pn\" (UniqueName: \"kubernetes.io/projected/e70704a8-4517-4ee2-8a1f-de93c014b0da-kube-api-access-4n4pn\") pod \"cluster-samples-operator-665b6dd947-kmr2s\" (UID: \"e70704a8-4517-4ee2-8a1f-de93c014b0da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472835 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k2c6\" (UniqueName: 
\"kubernetes.io/projected/54d01473-f99a-47d2-ae35-0a4b933b5098-kube-api-access-6k2c6\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfbkw\" (UID: \"54d01473-f99a-47d2-ae35-0a4b933b5098\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472862 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msw9j\" (UniqueName: \"kubernetes.io/projected/3e14730e-9ab1-4dd1-b786-142b82b59802-kube-api-access-msw9j\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472889 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hkwp\" (UniqueName: \"kubernetes.io/projected/63625da7-0f0d-48f1-8b58-c75e04bc31e4-kube-api-access-6hkwp\") pod \"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472923 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472947 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472965 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-serving-cert\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472984 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473011 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqkdw\" (UniqueName: \"kubernetes.io/projected/afb31fcb-c363-4434-a971-126881422750-kube-api-access-vqkdw\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473040 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473068 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.472635 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/76b05f76-b086-4375-9ba4-b1d4f5624ba0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473092 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98a6e235-45c4-4192-8060-6771a197f829-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473148 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-signing-cabundle\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473175 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-tls\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473199 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-dir\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473224 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-plugins-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473249 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-signing-key\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473277 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/860674d0-3197-44c8-8912-15ec9f06c643-cert\") pod \"ingress-canary-76fnn\" (UID: \"860674d0-3197-44c8-8912-15ec9f06c643\") " pod="openshift-ingress-canary/ingress-canary-76fnn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473306 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-ca\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473328 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b05f76-b086-4375-9ba4-b1d4f5624ba0-config\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473354 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-mountpoint-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473381 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2726995b-6d26-48c3-9e7d-da323657f55c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473423 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjtnl\" (UniqueName: \"kubernetes.io/projected/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-kube-api-access-tjtnl\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473448 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/0621879e-29ff-49a3-81fc-1bde4a2d22ae-srv-cert\") pod \"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473473 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d4d6ee6-9302-4b5e-b917-3a52513210ed-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473502 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg79j\" (UniqueName: \"kubernetes.io/projected/5d4d6ee6-9302-4b5e-b917-3a52513210ed-kube-api-access-jg79j\") pod \"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473528 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smmpr\" (UniqueName: \"kubernetes.io/projected/da2d7bf2-3fcc-42c4-ae05-c16d5c714a26-kube-api-access-smmpr\") pod \"auto-csr-approver-29555042-287fz\" (UID: \"da2d7bf2-3fcc-42c4-ae05-c16d5c714a26\") " pod="openshift-infra/auto-csr-approver-29555042-287fz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473567 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473592 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csc7\" (UniqueName: \"kubernetes.io/projected/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-kube-api-access-6csc7\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473620 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1c6c047-6fde-4d86-a82c-d8d259265412-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473640 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/481587db-884d-425d-ba58-4921449275ef-serving-cert\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473648 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d13f5606-abd9-46c4-b1db-1f01a3275ba8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.473968 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-trusted-ca\") pod \"console-operator-58897d9998-6vssd\" (UID: 
\"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.474287 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:13.974272167 +0000 UTC m=+207.556307910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.474560 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-ca\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.474940 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.474961 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-dir\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: 
\"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.475815 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-etcd-client\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.476043 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b05f76-b086-4375-9ba4-b1d4f5624ba0-config\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.476824 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f54e91-6901-4948-937c-cb0b85f43196-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.476854 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1189f657-b031-4ece-859b-95d3eadd8221-config\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.477324 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b24173e0-5140-4c4d-ab8b-ea3696db0b74-metrics-certs\") pod 
\"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.477568 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.477808 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-etcd-client\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.480774 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb31fcb-c363-4434-a971-126881422750-serving-cert\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.481847 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1c6c047-6fde-4d86-a82c-d8d259265412-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.481929 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.482024 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d13f5606-abd9-46c4-b1db-1f01a3275ba8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.482892 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.483804 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-tls\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.503051 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.534286 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.538702 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.541882 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.561928 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.566228 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-etcd-serving-ca\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.576918 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.577226 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.077191436 +0000 UTC m=+207.659227209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577290 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-mountpoint-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577344 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2726995b-6d26-48c3-9e7d-da323657f55c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577410 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0621879e-29ff-49a3-81fc-1bde4a2d22ae-srv-cert\") pod \"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577448 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d4d6ee6-9302-4b5e-b917-3a52513210ed-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577456 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-mountpoint-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577486 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg79j\" (UniqueName: \"kubernetes.io/projected/5d4d6ee6-9302-4b5e-b917-3a52513210ed-kube-api-access-jg79j\") pod \"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577545 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smmpr\" (UniqueName: \"kubernetes.io/projected/da2d7bf2-3fcc-42c4-ae05-c16d5c714a26-kube-api-access-smmpr\") pod \"auto-csr-approver-29555042-287fz\" (UID: \"da2d7bf2-3fcc-42c4-ae05-c16d5c714a26\") " pod="openshift-infra/auto-csr-approver-29555042-287fz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577588 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csc7\" (UniqueName: \"kubernetes.io/projected/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-kube-api-access-6csc7\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577631 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/03f48903-53e7-4a18-a2ee-2a94eefc9182-certs\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577686 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-secret-volume\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577719 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a6838f8-a7aa-46c5-9e16-6885daff4a88-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: \"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577763 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-config-volume\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577797 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tdth\" (UniqueName: \"kubernetes.io/projected/d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a-kube-api-access-2tdth\") pod \"multus-admission-controller-857f4d67dd-bqvjf\" (UID: \"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 
08:02:13.577859 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/54d01473-f99a-47d2-ae35-0a4b933b5098-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfbkw\" (UID: \"54d01473-f99a-47d2-ae35-0a4b933b5098\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577898 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac7cc05e-989a-4474-9685-9600e3502dfd-webhook-cert\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577937 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-profile-collector-cert\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.577969 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/03f48903-53e7-4a18-a2ee-2a94eefc9182-node-bootstrap-token\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578002 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/98a6e235-45c4-4192-8060-6771a197f829-images\") pod 
\"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578057 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgxjt\" (UniqueName: \"kubernetes.io/projected/879e1dfb-bff9-4ff0-99e3-912124941b77-kube-api-access-pgxjt\") pod \"migrator-59844c95c7-8m4nh\" (UID: \"879e1dfb-bff9-4ff0-99e3-912124941b77\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578108 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31ccf9d3-41db-457e-9341-744f7945b4f5-serving-cert\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578198 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578235 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-srv-cert\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578267 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-socket-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578297 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8f4l\" (UniqueName: \"kubernetes.io/projected/ac7cc05e-989a-4474-9685-9600e3502dfd-kube-api-access-l8f4l\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578343 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx75m\" (UniqueName: \"kubernetes.io/projected/88d136d4-995f-44d0-8691-e84bfacb68c3-kube-api-access-rx75m\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578381 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578413 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kt52\" (UniqueName: \"kubernetes.io/projected/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-kube-api-access-4kt52\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578448 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-config-volume\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578484 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac7cc05e-989a-4474-9685-9600e3502dfd-tmpfs\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578517 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578554 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578585 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-bqvjf\" (UID: \"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578620 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-registration-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578666 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77tht\" (UniqueName: \"kubernetes.io/projected/38092207-e107-4e5a-8706-b3ad66bea661-kube-api-access-77tht\") pod \"package-server-manager-789f6589d5-w2hxb\" (UID: \"38092207-e107-4e5a-8706-b3ad66bea661\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578714 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2726995b-6d26-48c3-9e7d-da323657f55c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578749 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztw8s\" (UniqueName: \"kubernetes.io/projected/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-kube-api-access-ztw8s\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578785 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2726995b-6d26-48c3-9e7d-da323657f55c-config\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578831 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31ccf9d3-41db-457e-9341-744f7945b4f5-config\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578882 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-metrics-tls\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578917 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98a6e235-45c4-4192-8060-6771a197f829-proxy-tls\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.578947 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d4d6ee6-9302-4b5e-b917-3a52513210ed-proxy-tls\") pod \"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:13 crc 
kubenswrapper[4809]: I0312 08:02:13.578997 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-752tp\" (UniqueName: \"kubernetes.io/projected/512b5035-d9af-4615-b351-2199e94f9c50-kube-api-access-752tp\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579031 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fv62\" (UniqueName: \"kubernetes.io/projected/0621879e-29ff-49a3-81fc-1bde4a2d22ae-kube-api-access-5fv62\") pod \"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579068 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchwv\" (UniqueName: \"kubernetes.io/projected/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-kube-api-access-zchwv\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579100 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89gq2\" (UniqueName: \"kubernetes.io/projected/31ccf9d3-41db-457e-9341-744f7945b4f5-kube-api-access-89gq2\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579162 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0621879e-29ff-49a3-81fc-1bde4a2d22ae-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579210 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579246 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/38092207-e107-4e5a-8706-b3ad66bea661-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2hxb\" (UID: \"38092207-e107-4e5a-8706-b3ad66bea661\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579282 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a6838f8-a7aa-46c5-9e16-6885daff4a88-config\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: \"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579342 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac7cc05e-989a-4474-9685-9600e3502dfd-apiservice-cert\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579407 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5l6f\" (UniqueName: \"kubernetes.io/projected/03f48903-53e7-4a18-a2ee-2a94eefc9182-kube-api-access-m5l6f\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6838f8-a7aa-46c5-9e16-6885daff4a88-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: \"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579485 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svnhs\" (UniqueName: \"kubernetes.io/projected/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-kube-api-access-svnhs\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579526 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-csi-data-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579561 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g7tv\" (UniqueName: \"kubernetes.io/projected/98a6e235-45c4-4192-8060-6771a197f829-kube-api-access-6g7tv\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: 
\"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579596 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7sk\" (UniqueName: \"kubernetes.io/projected/860674d0-3197-44c8-8912-15ec9f06c643-kube-api-access-xk7sk\") pod \"ingress-canary-76fnn\" (UID: \"860674d0-3197-44c8-8912-15ec9f06c643\") " pod="openshift-ingress-canary/ingress-canary-76fnn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579644 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k2c6\" (UniqueName: \"kubernetes.io/projected/54d01473-f99a-47d2-ae35-0a4b933b5098-kube-api-access-6k2c6\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfbkw\" (UID: \"54d01473-f99a-47d2-ae35-0a4b933b5098\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579708 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579764 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579797 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579830 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98a6e235-45c4-4192-8060-6771a197f829-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579864 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-signing-cabundle\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579898 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-plugins-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579929 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-signing-key\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.579962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/860674d0-3197-44c8-8912-15ec9f06c643-cert\") pod \"ingress-canary-76fnn\" (UID: \"860674d0-3197-44c8-8912-15ec9f06c643\") " pod="openshift-ingress-canary/ingress-canary-76fnn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.583719 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0621879e-29ff-49a3-81fc-1bde4a2d22ae-srv-cert\") pod \"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.585080 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2726995b-6d26-48c3-9e7d-da323657f55c-config\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.585499 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/860674d0-3197-44c8-8912-15ec9f06c643-cert\") pod \"ingress-canary-76fnn\" (UID: \"860674d0-3197-44c8-8912-15ec9f06c643\") " pod="openshift-ingress-canary/ingress-canary-76fnn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.586427 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-secret-volume\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.587007 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d4d6ee6-9302-4b5e-b917-3a52513210ed-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.588226 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31ccf9d3-41db-457e-9341-744f7945b4f5-config\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.588632 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.588776 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-socket-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.589223 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.089197414 +0000 UTC m=+207.671233167 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.590556 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-csi-data-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.590654 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-config-volume\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.594205 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0621879e-29ff-49a3-81fc-1bde4a2d22ae-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.597850 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-metrics-tls\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " 
pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.598517 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-profile-collector-cert\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.598656 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31ccf9d3-41db-457e-9341-744f7945b4f5-serving-cert\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.599056 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d4d6ee6-9302-4b5e-b917-3a52513210ed-proxy-tls\") pod \"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.600654 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/54d01473-f99a-47d2-ae35-0a4b933b5098-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfbkw\" (UID: \"54d01473-f99a-47d2-ae35-0a4b933b5098\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.601439 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/481587db-884d-425d-ba58-4921449275ef-audit\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.601943 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2726995b-6d26-48c3-9e7d-da323657f55c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.602889 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ac7cc05e-989a-4474-9685-9600e3502dfd-tmpfs\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.603780 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6838f8-a7aa-46c5-9e16-6885daff4a88-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: \"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.604519 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.606261 
4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-registration-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.607035 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/88d136d4-995f-44d0-8691-e84bfacb68c3-plugins-dir\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.607064 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98a6e235-45c4-4192-8060-6771a197f829-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.607972 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/98a6e235-45c4-4192-8060-6771a197f829-images\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.608756 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-signing-cabundle\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 
08:02:13.609036 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-config-volume\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.609363 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.609461 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac7cc05e-989a-4474-9685-9600e3502dfd-webhook-cert\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.609618 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.609904 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-signing-key\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 
08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.610171 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a6838f8-a7aa-46c5-9e16-6885daff4a88-config\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: \"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.610481 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.610846 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.611082 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/03f48903-53e7-4a18-a2ee-2a94eefc9182-node-bootstrap-token\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.611553 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-srv-cert\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") 
" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.613810 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.614967 4809 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.615089 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles podName:98cc5813-f6d5-4e2a-a7d4-2546e18e60f9 nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.615053322 +0000 UTC m=+208.197089055 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles") pod "controller-manager-879f6c89f-r5hmb" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9") : failed to sync configmap cache: timed out waiting for the condition Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.615311 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98a6e235-45c4-4192-8060-6771a197f829-proxy-tls\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.616083 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/38092207-e107-4e5a-8706-b3ad66bea661-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2hxb\" (UID: \"38092207-e107-4e5a-8706-b3ad66bea661\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.618080 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac7cc05e-989a-4474-9685-9600e3502dfd-apiservice-cert\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.620947 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.621300 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/03f48903-53e7-4a18-a2ee-2a94eefc9182-certs\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.621532 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.629275 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bqvjf\" (UID: \"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.635806 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76b05f76-b086-4375-9ba4-b1d4f5624ba0-images\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.680584 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.681035 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 08:02:14.180981101 +0000 UTC m=+207.763016844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.681305 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.682307 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.182295399 +0000 UTC m=+207.764331142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.683566 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx998\" (UniqueName: \"kubernetes.io/projected/76b05f76-b086-4375-9ba4-b1d4f5624ba0-kube-api-access-tx998\") pod \"machine-api-operator-5694c8668f-cqnvn\" (UID: \"76b05f76-b086-4375-9ba4-b1d4f5624ba0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.701540 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8fz\" (UniqueName: \"kubernetes.io/projected/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-kube-api-access-2s8fz\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.722545 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrkgv\" (UniqueName: \"kubernetes.io/projected/6f849ade-9f03-46fd-b9c5-5ddd61a27d1c-kube-api-access-vrkgv\") pod \"console-operator-58897d9998-6vssd\" (UID: \"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c\") " pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.727097 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.741368 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cppn\" (UniqueName: \"kubernetes.io/projected/a752ff7b-9553-492d-83d0-42bb9ea5dfa9-kube-api-access-7cppn\") pod \"downloads-7954f5f757-fl8dg\" (UID: \"a752ff7b-9553-492d-83d0-42bb9ea5dfa9\") " pod="openshift-console/downloads-7954f5f757-fl8dg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.764239 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd7cq\" (UniqueName: \"kubernetes.io/projected/60f54e91-6901-4948-937c-cb0b85f43196-kube-api-access-dd7cq\") pod \"openshift-controller-manager-operator-756b6f6bc6-494j4\" (UID: \"60f54e91-6901-4948-937c-cb0b85f43196\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.769884 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.778970 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqjnj\" (UniqueName: \"kubernetes.io/projected/1189f657-b031-4ece-859b-95d3eadd8221-kube-api-access-tqjnj\") pod \"authentication-operator-69f744f599-dppmm\" (UID: \"1189f657-b031-4ece-859b-95d3eadd8221\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.788267 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.789029 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.289002999 +0000 UTC m=+207.871038732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.789167 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.805458 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmmvd\" (UniqueName: \"kubernetes.io/projected/d13f5606-abd9-46c4-b1db-1f01a3275ba8-kube-api-access-mmmvd\") pod \"openshift-apiserver-operator-796bbdcf4f-cd8df\" (UID: \"d13f5606-abd9-46c4-b1db-1f01a3275ba8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.826446 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5k5q\" (UniqueName: \"kubernetes.io/projected/9a23ba2c-e3a5-4447-955c-f60a09b92dcc-kube-api-access-m5k5q\") pod \"apiserver-7bbb656c7d-fvmsc\" (UID: \"9a23ba2c-e3a5-4447-955c-f60a09b92dcc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.829501 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.837227 4809 ???:1] "http: TLS handshake error from 192.168.126.11:56650: no serving certificate available for the kubelet" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.843926 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8zb7\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-kube-api-access-x8zb7\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.861255 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcbpn\" (UniqueName: \"kubernetes.io/projected/b24173e0-5140-4c4d-ab8b-ea3696db0b74-kube-api-access-vcbpn\") pod \"router-default-5444994796-ttmrw\" (UID: \"b24173e0-5140-4c4d-ab8b-ea3696db0b74\") " pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.888704 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-bound-sa-token\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.890304 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7df9\" (UniqueName: \"kubernetes.io/projected/481587db-884d-425d-ba58-4921449275ef-kube-api-access-w7df9\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.890571 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.891194 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.391172056 +0000 UTC m=+207.973207789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.895436 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7df9\" (UniqueName: \"kubernetes.io/projected/481587db-884d-425d-ba58-4921449275ef-kube-api-access-w7df9\") pod \"apiserver-76f77b778f-d525c\" (UID: \"481587db-884d-425d-ba58-4921449275ef\") " pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.907187 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4pn\" (UniqueName: \"kubernetes.io/projected/e70704a8-4517-4ee2-8a1f-de93c014b0da-kube-api-access-4n4pn\") pod \"cluster-samples-operator-665b6dd947-kmr2s\" (UID: \"e70704a8-4517-4ee2-8a1f-de93c014b0da\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.921280 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msw9j\" (UniqueName: \"kubernetes.io/projected/3e14730e-9ab1-4dd1-b786-142b82b59802-kube-api-access-msw9j\") pod \"oauth-openshift-558db77b4-s8wdz\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.941862 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hkwp\" (UniqueName: \"kubernetes.io/projected/63625da7-0f0d-48f1-8b58-c75e04bc31e4-kube-api-access-6hkwp\") pod \"openshift-config-operator-7777fb866f-2qrsm\" (UID: \"63625da7-0f0d-48f1-8b58-c75e04bc31e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.944398 4809 ???:1] "http: TLS handshake error from 192.168.126.11:56654: no serving certificate available for the kubelet" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.948108 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.956618 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9f8452e-0278-4321-9ed1-e38a44ce0cb2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p5h4v\" (UID: \"a9f8452e-0278-4321-9ed1-e38a44ce0cb2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.964509 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6vssd"] Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.966706 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fl8dg" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.979494 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqkdw\" (UniqueName: \"kubernetes.io/projected/afb31fcb-c363-4434-a971-126881422750-kube-api-access-vqkdw\") pod \"route-controller-manager-6576b87f9c-pqkrs\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.987987 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.996527 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.996821 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.496790204 +0000 UTC m=+208.078825937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.996879 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.997040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:13 crc kubenswrapper[4809]: E0312 08:02:13.997608 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.497600348 +0000 UTC m=+208.079636081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:13 crc kubenswrapper[4809]: I0312 08:02:13.999570 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjtnl\" (UniqueName: \"kubernetes.io/projected/d06cfae9-ac07-4c93-bb8d-f4548bbe303d-kube-api-access-tjtnl\") pod \"etcd-operator-b45778765-pghrh\" (UID: \"d06cfae9-ac07-4c93-bb8d-f4548bbe303d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.008377 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.011672 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.015604 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg79j\" (UniqueName: \"kubernetes.io/projected/5d4d6ee6-9302-4b5e-b917-3a52513210ed-kube-api-access-jg79j\") pod \"machine-config-controller-84d6567774-sm7hg\" (UID: \"5d4d6ee6-9302-4b5e-b917-3a52513210ed\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.035002 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.039062 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smmpr\" (UniqueName: \"kubernetes.io/projected/da2d7bf2-3fcc-42c4-ae05-c16d5c714a26-kube-api-access-smmpr\") pod \"auto-csr-approver-29555042-287fz\" (UID: \"da2d7bf2-3fcc-42c4-ae05-c16d5c714a26\") " pod="openshift-infra/auto-csr-approver-29555042-287fz" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.041379 4809 ???:1] "http: TLS handshake error from 192.168.126.11:56666: no serving certificate available for the kubelet" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.059603 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.060887 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csc7\" (UniqueName: \"kubernetes.io/projected/5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7-kube-api-access-6csc7\") pod \"dns-default-lt58n\" (UID: \"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7\") " pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.074867 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.075769 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.078849 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.081870 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5l6f\" (UniqueName: \"kubernetes.io/projected/03f48903-53e7-4a18-a2ee-2a94eefc9182-kube-api-access-m5l6f\") pod \"machine-config-server-n7bs7\" (UID: \"03f48903-53e7-4a18-a2ee-2a94eefc9182\") " pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.102083 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.102566 4809 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.602549076 +0000 UTC m=+208.184584799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.109095 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchwv\" (UniqueName: \"kubernetes.io/projected/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-kube-api-access-zchwv\") pod \"collect-profiles-29555040-4v65h\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.112874 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555042-287fz" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.127015 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89gq2\" (UniqueName: \"kubernetes.io/projected/31ccf9d3-41db-457e-9341-744f7945b4f5-kube-api-access-89gq2\") pod \"service-ca-operator-777779d784-r24lv\" (UID: \"31ccf9d3-41db-457e-9341-744f7945b4f5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.139402 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df"] Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.142071 4809 ???:1] "http: TLS handshake error from 192.168.126.11:56672: no serving certificate available for the kubelet" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.155880 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tdth\" (UniqueName: \"kubernetes.io/projected/d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a-kube-api-access-2tdth\") pod \"multus-admission-controller-857f4d67dd-bqvjf\" (UID: \"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.161045 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.162241 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a6838f8-a7aa-46c5-9e16-6885daff4a88-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vh7gf\" (UID: \"0a6838f8-a7aa-46c5-9e16-6885daff4a88\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.177314 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g7tv\" (UniqueName: \"kubernetes.io/projected/98a6e235-45c4-4192-8060-6771a197f829-kube-api-access-6g7tv\") pod \"machine-config-operator-74547568cd-7qz8g\" (UID: \"98a6e235-45c4-4192-8060-6771a197f829\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.183712 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.202037 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svnhs\" (UniqueName: \"kubernetes.io/projected/bb72ef9d-43d2-476d-80e9-d3b19139c7a9-kube-api-access-svnhs\") pod \"kube-storage-version-migrator-operator-b67b599dd-6lhmm\" (UID: \"bb72ef9d-43d2-476d-80e9-d3b19139c7a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.206540 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.206979 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.706958178 +0000 UTC m=+208.288993911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.215609 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.226080 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-752tp\" (UniqueName: \"kubernetes.io/projected/512b5035-d9af-4615-b351-2199e94f9c50-kube-api-access-752tp\") pod \"marketplace-operator-79b997595-hrnnz\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.239704 4809 ???:1] "http: TLS handshake error from 192.168.126.11:56678: no serving certificate available for the kubelet" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.253200 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.262088 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k2c6\" (UniqueName: \"kubernetes.io/projected/54d01473-f99a-47d2-ae35-0a4b933b5098-kube-api-access-6k2c6\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfbkw\" (UID: \"54d01473-f99a-47d2-ae35-0a4b933b5098\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.262540 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n7bs7" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.269135 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fl8dg"] Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.278084 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fv62\" (UniqueName: \"kubernetes.io/projected/0621879e-29ff-49a3-81fc-1bde4a2d22ae-kube-api-access-5fv62\") pod \"olm-operator-6b444d44fb-tjjbt\" (UID: \"0621879e-29ff-49a3-81fc-1bde4a2d22ae\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.280182 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4"] Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.283995 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.291715 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7sk\" (UniqueName: \"kubernetes.io/projected/860674d0-3197-44c8-8912-15ec9f06c643-kube-api-access-xk7sk\") pod \"ingress-canary-76fnn\" (UID: \"860674d0-3197-44c8-8912-15ec9f06c643\") " pod="openshift-ingress-canary/ingress-canary-76fnn" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.294754 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cqnvn"] Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.297966 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" event={"ID":"d13f5606-abd9-46c4-b1db-1f01a3275ba8","Type":"ContainerStarted","Data":"109b4b45c87aecbdd25eadd3b2d8808752d2335c9a2577207aea163cb1a6255e"} Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.315720 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.317031 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tht\" (UniqueName: \"kubernetes.io/projected/38092207-e107-4e5a-8706-b3ad66bea661-kube-api-access-77tht\") pod \"package-server-manager-789f6589d5-w2hxb\" (UID: \"38092207-e107-4e5a-8706-b3ad66bea661\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.318132 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-6vssd" event={"ID":"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c","Type":"ContainerStarted","Data":"d452efd5d28da70248caa45e98cb2a4fbbbd135e48d0babe84509954aa38fbf7"} Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.318181 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.818151617 +0000 UTC m=+208.400187350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.334029 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgxjt\" (UniqueName: \"kubernetes.io/projected/879e1dfb-bff9-4ff0-99e3-912124941b77-kube-api-access-pgxjt\") pod \"migrator-59844c95c7-8m4nh\" (UID: \"879e1dfb-bff9-4ff0-99e3-912124941b77\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.343095 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx75m\" (UniqueName: \"kubernetes.io/projected/88d136d4-995f-44d0-8691-e84bfacb68c3-kube-api-access-rx75m\") pod \"csi-hostpathplugin-njprg\" (UID: \"88d136d4-995f-44d0-8691-e84bfacb68c3\") " pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.352887 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-dppmm"] Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.370368 4809 ???:1] "http: TLS handshake error from 192.168.126.11:56682: no serving certificate available for the kubelet" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.372244 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/959e6af8-6641-4e70-bb7f-cfe2c8ea56a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-k88nv\" (UID: \"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.380333 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8f4l\" (UniqueName: \"kubernetes.io/projected/ac7cc05e-989a-4474-9685-9600e3502dfd-kube-api-access-l8f4l\") pod \"packageserver-d55dfcdfc-xx52w\" (UID: \"ac7cc05e-989a-4474-9685-9600e3502dfd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.397817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kt52\" (UniqueName: \"kubernetes.io/projected/50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d-kube-api-access-4kt52\") pod \"catalog-operator-68c6474976-vqtvl\" (UID: \"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.416913 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztw8s\" (UniqueName: \"kubernetes.io/projected/f9891031-02d7-4b3a-8fb8-40fd8bb9a825-kube-api-access-ztw8s\") pod \"service-ca-9c57cc56f-pxnfg\" (UID: \"f9891031-02d7-4b3a-8fb8-40fd8bb9a825\") " pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:14 crc 
kubenswrapper[4809]: I0312 08:02:14.417955 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.420409 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:14.920390157 +0000 UTC m=+208.502426000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.421916 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.425736 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.433144 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.447134 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.453631 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.471193 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.476149 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.486276 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2726995b-6d26-48c3-9e7d-da323657f55c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p6jxb\" (UID: \"2726995b-6d26-48c3-9e7d-da323657f55c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.492068 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.497997 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.507351 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.519601 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.519802 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.019769564 +0000 UTC m=+208.601805297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.519996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.520434 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.020418263 +0000 UTC m=+208.602453996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.530463 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.538374 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.555495 4809 ???:1] "http: TLS handshake error from 192.168.126.11:56696: no serving certificate available for the kubelet" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.577373 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-76fnn" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.591343 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-njprg" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.621360 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.621581 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.1215428 +0000 UTC m=+208.703578533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.621841 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.622176 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.122162028 +0000 UTC m=+208.704197761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.622321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.623268 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5hmb\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.723203 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.723350 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.223329417 +0000 UTC m=+208.805365150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.723900 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.725009 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.224992445 +0000 UTC m=+208.807028178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.739677 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.747073 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s8wdz"] Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.825233 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.825718 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.32570107 +0000 UTC m=+208.907736793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.838978 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v"] Mar 12 08:02:14 crc kubenswrapper[4809]: W0312 08:02:14.878036 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e14730e_9ab1_4dd1_b786_142b82b59802.slice/crio-129d9cedf8fafeda64a7715f3b69433c81c8b858a158b9d265d5bce44dcb5563 WatchSource:0}: Error finding container 129d9cedf8fafeda64a7715f3b69433c81c8b858a158b9d265d5bce44dcb5563: Status 404 returned error can't find the container with id 129d9cedf8fafeda64a7715f3b69433c81c8b858a158b9d265d5bce44dcb5563 Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.916443 4809 ???:1] "http: TLS handshake error from 192.168.126.11:42260: no serving certificate available for the kubelet" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.922319 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:14 crc kubenswrapper[4809]: I0312 08:02:14.927279 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:14 crc kubenswrapper[4809]: E0312 08:02:14.927768 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.427742755 +0000 UTC m=+209.009778658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.023574 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.030219 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:15 crc 
kubenswrapper[4809]: E0312 08:02:15.030678 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.530654543 +0000 UTC m=+209.112690276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.049608 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.049670 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:02:15 crc kubenswrapper[4809]: W0312 08:02:15.063607 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafb31fcb_c363_4434_a971_126881422750.slice/crio-b6a5875ba921eb35affcb34b2f123a07e0462d157033e74af60781b99bf1f454 WatchSource:0}: Error finding container b6a5875ba921eb35affcb34b2f123a07e0462d157033e74af60781b99bf1f454: Status 404 
returned error can't find the container with id b6a5875ba921eb35affcb34b2f123a07e0462d157033e74af60781b99bf1f454 Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.132239 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:15 crc kubenswrapper[4809]: E0312 08:02:15.132632 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.632615745 +0000 UTC m=+209.214651478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.217805 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.219071 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d525c"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.219132 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pghrh"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.233260 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:15 crc kubenswrapper[4809]: E0312 08:02:15.233713 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.733690671 +0000 UTC m=+209.315726404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.314493 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555042-287fz"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.331695 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.338357 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 
12 08:02:15 crc kubenswrapper[4809]: E0312 08:02:15.339472 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.839450913 +0000 UTC m=+209.421486646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.378554 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.378631 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" event={"ID":"d06cfae9-ac07-4c93-bb8d-f4548bbe303d","Type":"ContainerStarted","Data":"170afa8d181feedab80eebe0e2473af125c16f46212a557959dc24839968a673"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.391458 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d525c" event={"ID":"481587db-884d-425d-ba58-4921449275ef","Type":"ContainerStarted","Data":"ec1f0a3480bbde17c9b005081e74829dc2519bde5a9a88107e91a6c207be0366"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.400003 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" 
event={"ID":"3e14730e-9ab1-4dd1-b786-142b82b59802","Type":"ContainerStarted","Data":"129d9cedf8fafeda64a7715f3b69433c81c8b858a158b9d265d5bce44dcb5563"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.402491 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" event={"ID":"9a23ba2c-e3a5-4447-955c-f60a09b92dcc","Type":"ContainerStarted","Data":"e5be6cbe78f82ef257ee6aa99ae8fa26bdfcf109538e140c1c69674f3973c2f5"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.409046 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" event={"ID":"60f54e91-6901-4948-937c-cb0b85f43196","Type":"ContainerStarted","Data":"7e8bfed734ea46c7194bfcb76be424ccbbe29afe6cff11d7fdc92a122cca6d53"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.409150 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" event={"ID":"60f54e91-6901-4948-937c-cb0b85f43196","Type":"ContainerStarted","Data":"c87ab94c5af7185abc42a51b5409a3d9d4eff25a7230d78a9909895716802665"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.413823 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" event={"ID":"a9f8452e-0278-4321-9ed1-e38a44ce0cb2","Type":"ContainerStarted","Data":"e2917ffacd42dfc886dd82000d96d24795d5c9f6410fbd5fc3d1d0fa9b213c23"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.420223 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" event={"ID":"afb31fcb-c363-4434-a971-126881422750","Type":"ContainerStarted","Data":"b6a5875ba921eb35affcb34b2f123a07e0462d157033e74af60781b99bf1f454"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.423550 4809 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n7bs7" event={"ID":"03f48903-53e7-4a18-a2ee-2a94eefc9182","Type":"ContainerStarted","Data":"9fac104a0cbb277068d53a71c9a521a89ef64f6479080376560b7e83b3ce2c87"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.423583 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n7bs7" event={"ID":"03f48903-53e7-4a18-a2ee-2a94eefc9182","Type":"ContainerStarted","Data":"1420fa041d209d98efcfc17ea9f4ed627b8653e20a109fb9ce3b0b869be239a1"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.426411 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fl8dg" event={"ID":"a752ff7b-9553-492d-83d0-42bb9ea5dfa9","Type":"ContainerStarted","Data":"f75c66af130d80484fd10471c08e7f9f0fdfddb164f3367b34ee1d69436a347e"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.426467 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fl8dg" event={"ID":"a752ff7b-9553-492d-83d0-42bb9ea5dfa9","Type":"ContainerStarted","Data":"b290a38e8a83157247b307c81c9f9b80979232ab8293d3f4fa3ea69bba2d33d6"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.426825 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fl8dg" Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.431886 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-fl8dg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.431977 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fl8dg" podUID="a752ff7b-9553-492d-83d0-42bb9ea5dfa9" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.432918 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" event={"ID":"d13f5606-abd9-46c4-b1db-1f01a3275ba8","Type":"ContainerStarted","Data":"d99d00f848a7b691a2d91774da8b2bfb789ec0450247f0b0632b41343c3a97c6"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.434577 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" event={"ID":"1189f657-b031-4ece-859b-95d3eadd8221","Type":"ContainerStarted","Data":"2fa15bd351bb201693c81881d012f11f8d49df826a5bf0e3f9c7ab0304d163f4"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.434620 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" event={"ID":"1189f657-b031-4ece-859b-95d3eadd8221","Type":"ContainerStarted","Data":"adcf3e0db137b0ed66a1280bab1bc15cb2f8dad9262044fc944e5f3e8b545bc0"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.437086 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" event={"ID":"76b05f76-b086-4375-9ba4-b1d4f5624ba0","Type":"ContainerStarted","Data":"5d378c75a671f4174657258121dce2d04de347a7cd3ca41c132436941f69fd91"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.437139 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" event={"ID":"76b05f76-b086-4375-9ba4-b1d4f5624ba0","Type":"ContainerStarted","Data":"687675a6c2235edf5e17363d113c21eb05dfb82073ffad4dd004bfbae591ead9"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.440209 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:15 crc kubenswrapper[4809]: E0312 08:02:15.441044 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:15.941022854 +0000 UTC m=+209.523058587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.443920 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ttmrw" event={"ID":"b24173e0-5140-4c4d-ab8b-ea3696db0b74","Type":"ContainerStarted","Data":"1d7005312ac15624256f0fb0b81489425a792d625a0722a57dbd659e04d2e71e"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.443965 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ttmrw" event={"ID":"b24173e0-5140-4c4d-ab8b-ea3696db0b74","Type":"ContainerStarted","Data":"d0357b2ca9542c62304a9e8680fc8bc20482ad86e8086b4203e0af20aa7374f6"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.446370 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6vssd" 
event={"ID":"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c","Type":"ContainerStarted","Data":"1687b7a0dc62ca787c34748f8e774d1398baeeb28e8ceb14079cdf2e8b998238"} Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.446713 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.449378 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.449438 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.485847 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:02:15 crc kubenswrapper[4809]: W0312 08:02:15.539228 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda2d7bf2_3fcc_42c4_ae05_c16d5c714a26.slice/crio-292097c459141af1f0205f62399e3e53498f8d0a406b79717420f972954b3a44 WatchSource:0}: Error finding container 292097c459141af1f0205f62399e3e53498f8d0a406b79717420f972954b3a44: Status 404 returned error can't find the container with id 292097c459141af1f0205f62399e3e53498f8d0a406b79717420f972954b3a44 Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.544661 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:15 crc kubenswrapper[4809]: E0312 08:02:15.550109 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.04866186 +0000 UTC m=+209.630697593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.576582 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.602993 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-fvtvw" podStartSLOduration=171.602963702 podStartE2EDuration="2m51.602963702s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:15.596568127 +0000 UTC m=+209.178603860" watchObservedRunningTime="2026-03-12 08:02:15.602963702 +0000 UTC m=+209.184999435" Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.605354 4809 ???:1] "http: TLS handshake error from 
192.168.126.11:42276: no serving certificate available for the kubelet" Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.646841 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:15 crc kubenswrapper[4809]: E0312 08:02:15.648847 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.14882577 +0000 UTC m=+209.730861503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.714729 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.749886 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:15 crc kubenswrapper[4809]: 
E0312 08:02:15.750303 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.250287306 +0000 UTC m=+209.832323039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.793536 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rvfv4" podStartSLOduration=171.793518618 podStartE2EDuration="2m51.793518618s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:15.793081046 +0000 UTC m=+209.375116779" watchObservedRunningTime="2026-03-12 08:02:15.793518618 +0000 UTC m=+209.375554351" Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.853063 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:15 crc kubenswrapper[4809]: E0312 08:02:15.853557 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.353538726 +0000 UTC m=+209.935574459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.884991 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.896476 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.906914 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lt58n"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.934623 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r24lv"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.957733 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:15 crc kubenswrapper[4809]: E0312 08:02:15.958085 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.458073922 +0000 UTC m=+210.040109655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.985485 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv"] Mar 12 08:02:15 crc kubenswrapper[4809]: I0312 08:02:15.998880 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pxnfg"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.059490 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.060021 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.559998573 +0000 UTC m=+210.142034306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.077621 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.093688 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:16 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:16 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:16 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.093741 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.099312 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-76fnn"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.118442 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.118950 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.136125 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrnnz"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.160334 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.160385 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.160963 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.161276 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.661265265 +0000 UTC m=+210.243300998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.163710 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.165218 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bqvjf"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.172316 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.172356 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.175978 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-njprg"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.179353 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.184587 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.185015 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g4vh8" podStartSLOduration=172.184996552 podStartE2EDuration="2m52.184996552s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.179833252 +0000 UTC m=+209.761868985" watchObservedRunningTime="2026-03-12 08:02:16.184996552 +0000 UTC m=+209.767032285" Mar 12 08:02:16 crc kubenswrapper[4809]: W0312 08:02:16.202270 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod860674d0_3197_44c8_8912_15ec9f06c643.slice/crio-a6fd5957d526886e30bc33a69ba88e52177ce57afcb5d70f67316cbdf90e62a7 WatchSource:0}: Error finding container a6fd5957d526886e30bc33a69ba88e52177ce57afcb5d70f67316cbdf90e62a7: Status 404 returned error can't find the container with id a6fd5957d526886e30bc33a69ba88e52177ce57afcb5d70f67316cbdf90e62a7 Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.205338 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5hmb"] Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.266197 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.266614 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.766591924 +0000 UTC m=+210.348627657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.274930 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ljb8d" podStartSLOduration=172.274888094 podStartE2EDuration="2m52.274888094s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.240455647 +0000 UTC m=+209.822491380" watchObservedRunningTime="2026-03-12 08:02:16.274888094 +0000 UTC m=+209.856923827" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.354214 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-494j4" podStartSLOduration=172.35419337 podStartE2EDuration="2m52.35419337s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.322617686 +0000 UTC m=+209.904653419" watchObservedRunningTime="2026-03-12 08:02:16.35419337 +0000 UTC m=+209.936229103" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.356722 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ttmrw" podStartSLOduration=172.356715853 podStartE2EDuration="2m52.356715853s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.353765687 +0000 UTC m=+209.935801420" watchObservedRunningTime="2026-03-12 08:02:16.356715853 +0000 UTC m=+209.938751586" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.372304 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.372715 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.872699416 +0000 UTC m=+210.454735139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.409282 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cd8df" podStartSLOduration=172.409260874 podStartE2EDuration="2m52.409260874s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.402174759 +0000 UTC m=+209.984210492" watchObservedRunningTime="2026-03-12 08:02:16.409260874 +0000 UTC m=+209.991296607" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.435083 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podStartSLOduration=172.43506124 podStartE2EDuration="2m52.43506124s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.431596721 +0000 UTC m=+210.013632454" watchObservedRunningTime="2026-03-12 08:02:16.43506124 +0000 UTC m=+210.017096973" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.467700 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" 
event={"ID":"0a6838f8-a7aa-46c5-9e16-6885daff4a88","Type":"ContainerStarted","Data":"a05bdfc02c9220aac201ee287b5d0a856f7060b3d539ce78ed7a3abc4651823a"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.468792 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" event={"ID":"afb31fcb-c363-4434-a971-126881422750","Type":"ContainerStarted","Data":"212f9ce05b5de2e2b96ebd71f3bfad25ec3dbf76ec0e9e37d6225508c00a3210"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.469513 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.470528 4809 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-pqkrs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.470576 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" podUID="afb31fcb-c363-4434-a971-126881422750" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.475803 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.475940 4809 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.975918124 +0000 UTC m=+210.557953857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.476126 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.477986 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:16.977971423 +0000 UTC m=+210.560007156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.481832 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fl8dg" podStartSLOduration=172.481813194 podStartE2EDuration="2m52.481813194s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.479999532 +0000 UTC m=+210.062035255" watchObservedRunningTime="2026-03-12 08:02:16.481813194 +0000 UTC m=+210.063848927" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.483217 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" event={"ID":"98a6e235-45c4-4192-8060-6771a197f829","Type":"ContainerStarted","Data":"327e05048456043eb038dd1a2a834c44fb6561577a658a15ef63c13f0bc560d6"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.485817 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-76fnn" event={"ID":"860674d0-3197-44c8-8912-15ec9f06c643","Type":"ContainerStarted","Data":"a6fd5957d526886e30bc33a69ba88e52177ce57afcb5d70f67316cbdf90e62a7"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.549254 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" 
event={"ID":"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9","Type":"ContainerStarted","Data":"f0dfe500fa4e8d2ea4696c0ee2c91e58dbc07b967dcaccca3dcdb7cce66d03a1"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.562172 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-n7bs7" podStartSLOduration=5.562153 podStartE2EDuration="5.562153s" podCreationTimestamp="2026-03-12 08:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.532268825 +0000 UTC m=+210.114304558" watchObservedRunningTime="2026-03-12 08:02:16.562153 +0000 UTC m=+210.144188733" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.568369 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" podStartSLOduration=172.5683459 podStartE2EDuration="2m52.5683459s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.56560773 +0000 UTC m=+210.147643463" watchObservedRunningTime="2026-03-12 08:02:16.5683459 +0000 UTC m=+210.150381623" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.577823 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.578133 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.078102342 +0000 UTC m=+210.660138065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.583889 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" event={"ID":"42f8eef2-550d-4e94-9719-3f2abbbb3ecc","Type":"ContainerStarted","Data":"e3ae3ffc84184d6fe0be1bcee6b636b19a55d8c06909baf833e1e44c86967155"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.583962 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" event={"ID":"42f8eef2-550d-4e94-9719-3f2abbbb3ecc","Type":"ContainerStarted","Data":"bf01c3cb2fbdd09ec40c0a06c15ee62decd045987033f20ad5664b140fe01aab"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.587288 4809 generic.go:334] "Generic (PLEG): container finished" podID="481587db-884d-425d-ba58-4921449275ef" containerID="3080c45e051204151cd7f5f6d26276708120270eabc4fed6a55b77971532e8f0" exitCode=0 Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.587360 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d525c" event={"ID":"481587db-884d-425d-ba58-4921449275ef","Type":"ContainerDied","Data":"3080c45e051204151cd7f5f6d26276708120270eabc4fed6a55b77971532e8f0"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.595945 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="9a23ba2c-e3a5-4447-955c-f60a09b92dcc" containerID="b9c4386b9ecfed63154ea39befb630e4b99fa3ad9ddf8d9825d87b3ccd933375" exitCode=0 Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.596028 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" event={"ID":"9a23ba2c-e3a5-4447-955c-f60a09b92dcc","Type":"ContainerDied","Data":"b9c4386b9ecfed63154ea39befb630e4b99fa3ad9ddf8d9825d87b3ccd933375"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.657768 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" event={"ID":"38092207-e107-4e5a-8706-b3ad66bea661","Type":"ContainerStarted","Data":"32ccc9c6cf27b7f3216fd786f6345cf219b3b990dc5225572d337a6615843ecc"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.679206 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.679713 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.179695833 +0000 UTC m=+210.761731566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.681231 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" event={"ID":"76b05f76-b086-4375-9ba4-b1d4f5624ba0","Type":"ContainerStarted","Data":"7c49e0cdbf9a3f6a93fa558395c0d1ecc81516cbe90c81cccebd1c5be1c05b0b"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.689669 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" podStartSLOduration=136.689644421 podStartE2EDuration="2m16.689644421s" podCreationTimestamp="2026-03-12 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.64849792 +0000 UTC m=+210.230533653" watchObservedRunningTime="2026-03-12 08:02:16.689644421 +0000 UTC m=+210.271680154" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.696008 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" event={"ID":"bb72ef9d-43d2-476d-80e9-d3b19139c7a9","Type":"ContainerStarted","Data":"3c1a87182511dd1a9e4a3147f149bcaee12a3881c75b455530a7e5c3c03ede6e"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.705755 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" 
event={"ID":"a9f8452e-0278-4321-9ed1-e38a44ce0cb2","Type":"ContainerStarted","Data":"1d3f37394dc57054dce0f66990f933916a2381d00b457e082b6c4226769cb0b6"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.705812 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" event={"ID":"a9f8452e-0278-4321-9ed1-e38a44ce0cb2","Type":"ContainerStarted","Data":"2be4f21fe2f4a6cd6fe76c85d02505a0f6d73b7c8e04d8adacff795ff33585c6"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.732646 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" event={"ID":"879e1dfb-bff9-4ff0-99e3-912124941b77","Type":"ContainerStarted","Data":"0ba95f5aada58c3128add4f944feeed34355c6689c9932418f29c424cdee516d"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.733825 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" podStartSLOduration=171.73380707 podStartE2EDuration="2m51.73380707s" podCreationTimestamp="2026-03-12 07:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.732722498 +0000 UTC m=+210.314758221" watchObservedRunningTime="2026-03-12 08:02:16.73380707 +0000 UTC m=+210.315842803" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.738100 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" event={"ID":"ac7cc05e-989a-4474-9685-9600e3502dfd","Type":"ContainerStarted","Data":"390ce4386eb43e9d4c698967e90d7e52d72218f7acc0a1a01e1fb958ffe6069c"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.739097 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" 
event={"ID":"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6","Type":"ContainerStarted","Data":"1cac8d8bffa0378c5e1d04562d0403351437c3dd5f8f004ab540044ff85e4c4a"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.747707 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" event={"ID":"f9891031-02d7-4b3a-8fb8-40fd8bb9a825","Type":"ContainerStarted","Data":"604bf3252ec8055b215e408c4ee34ef9c92368c7b99a0e0f44517f1281bd716c"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.760594 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555042-287fz" event={"ID":"da2d7bf2-3fcc-42c4-ae05-c16d5c714a26","Type":"ContainerStarted","Data":"292097c459141af1f0205f62399e3e53498f8d0a406b79717420f972954b3a44"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.780265 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.782265 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.282232751 +0000 UTC m=+210.864268484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.787668 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" event={"ID":"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d","Type":"ContainerStarted","Data":"0257fa4253aa6817817c144e0d09c0c16e6ec666a33ec69c23b2a48dcb7c9aaa"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.802206 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" podStartSLOduration=171.802184039 podStartE2EDuration="2m51.802184039s" podCreationTimestamp="2026-03-12 07:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.801688134 +0000 UTC m=+210.383723867" watchObservedRunningTime="2026-03-12 08:02:16.802184039 +0000 UTC m=+210.384219772" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.808232 4809 generic.go:334] "Generic (PLEG): container finished" podID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerID="ecc6eea6a27595a1416b3253570cbdb6e441a088d463aa00c3b563e3e09f3053" exitCode=0 Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.808318 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" event={"ID":"63625da7-0f0d-48f1-8b58-c75e04bc31e4","Type":"ContainerDied","Data":"ecc6eea6a27595a1416b3253570cbdb6e441a088d463aa00c3b563e3e09f3053"} Mar 12 08:02:16 crc 
kubenswrapper[4809]: I0312 08:02:16.808353 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" event={"ID":"63625da7-0f0d-48f1-8b58-c75e04bc31e4","Type":"ContainerStarted","Data":"9ff1c7ee106f81bce9e6242e447fa4c8b76639f9a0e48644c9b187d4d1d39c3e"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.823346 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" event={"ID":"3e14730e-9ab1-4dd1-b786-142b82b59802","Type":"ContainerStarted","Data":"07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.824042 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.839743 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" event={"ID":"2726995b-6d26-48c3-9e7d-da323657f55c","Type":"ContainerStarted","Data":"7c76a32cb218046fc8399a8d6127eb63da364bbd9a2337df2fc632df5577c7d4"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.840761 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p5h4v" podStartSLOduration=172.840741245 podStartE2EDuration="2m52.840741245s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.838631835 +0000 UTC m=+210.420667568" watchObservedRunningTime="2026-03-12 08:02:16.840741245 +0000 UTC m=+210.422776978" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.851640 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" event={"ID":"e70704a8-4517-4ee2-8a1f-de93c014b0da","Type":"ContainerStarted","Data":"02bfe53e6014fbcea9e3d9762eb41c24f7c307aaf2305db8c6ffae3ecbbfe2e6"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.851715 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" event={"ID":"e70704a8-4517-4ee2-8a1f-de93c014b0da","Type":"ContainerStarted","Data":"d93e3ae1de43676f708aa34ceb1dfe8114befaa143b0f58d6cf3ee1b40c048a1"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.883216 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.885742 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.385722357 +0000 UTC m=+210.967758090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.908157 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cqnvn" podStartSLOduration=172.908130696 podStartE2EDuration="2m52.908130696s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.903965905 +0000 UTC m=+210.486001638" watchObservedRunningTime="2026-03-12 08:02:16.908130696 +0000 UTC m=+210.490166419" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.933896 4809 ???:1] "http: TLS handshake error from 192.168.126.11:42286: no serving certificate available for the kubelet" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.947463 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" podStartSLOduration=172.947445674 podStartE2EDuration="2m52.947445674s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:16.946223589 +0000 UTC m=+210.528259322" watchObservedRunningTime="2026-03-12 08:02:16.947445674 +0000 UTC m=+210.529481407" Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.955667 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" event={"ID":"5d4d6ee6-9302-4b5e-b917-3a52513210ed","Type":"ContainerStarted","Data":"2a420068341ba95c930e19ab839e601edac212b77a3dc44fbfe9537ec8ba2f88"} Mar 12 08:02:16 crc kubenswrapper[4809]: I0312 08:02:16.990872 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:16 crc kubenswrapper[4809]: E0312 08:02:16.991959 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.491943482 +0000 UTC m=+211.073979215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.004373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" event={"ID":"31ccf9d3-41db-457e-9341-744f7945b4f5","Type":"ContainerStarted","Data":"c7a03fd554b3d48ccb55c319b24f29c7360bb63925de3d432bfb03f3066fe8f0"} Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.038066 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" event={"ID":"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a","Type":"ContainerStarted","Data":"7174533edd0a328c55a3cf888c9ce1c5242ebbbcb5e91f144108d13606fcc02b"} Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.039571 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" podStartSLOduration=172.039549541 podStartE2EDuration="2m52.039549541s" podCreationTimestamp="2026-03-12 07:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:17.038034647 +0000 UTC m=+210.620070380" watchObservedRunningTime="2026-03-12 08:02:17.039549541 +0000 UTC m=+210.621585274" Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.081575 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:17 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:17 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:17 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.081669 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.092822 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.093307 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.593292007 +0000 UTC m=+211.175327740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.102633 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lt58n" event={"ID":"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7","Type":"ContainerStarted","Data":"dd171ecca0d5ed852bda1d7c5bbbd61fa7c41c2968ec3a9f5e3110928a650112"} Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.144221 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" event={"ID":"d06cfae9-ac07-4c93-bb8d-f4548bbe303d","Type":"ContainerStarted","Data":"d66ff422f5ad23cffc5a2d7adbdda57132f27c4349f3ed34be6907ac66be16b9"} Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.165299 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" event={"ID":"54d01473-f99a-47d2-ae35-0a4b933b5098","Type":"ContainerStarted","Data":"2a6fa110fe8f01fc8284bee55a7ec8c66aa64d7fdd015129778d61f5fc2b5bbe"} Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.191565 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" event={"ID":"512b5035-d9af-4615-b351-2199e94f9c50","Type":"ContainerStarted","Data":"6eb87b3f4a897140c8631b1e3de2c8b460d0c5f5d94aa9803341a1a96391a1ac"} Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.195253 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.196846 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.696811914 +0000 UTC m=+211.278847647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.211916 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" event={"ID":"0621879e-29ff-49a3-81fc-1bde4a2d22ae","Type":"ContainerStarted","Data":"7d7787d389c8cf05beab10c3b18f45ac6dd97756845ee5f1c90ee8e05ebd4d86"} Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.222506 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njprg" event={"ID":"88d136d4-995f-44d0-8691-e84bfacb68c3","Type":"ContainerStarted","Data":"0684b2c5a756104be3376f175b8192ae2bf9f5259cf9742b243f83ef0e4d9c27"} Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.228132 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-fl8dg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: 
connection refused" start-of-body= Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.228178 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fl8dg" podUID="a752ff7b-9553-492d-83d0-42bb9ea5dfa9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.282306 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.297822 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.299707 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.799690562 +0000 UTC m=+211.381726295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.399915 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.403253 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:17.903229659 +0000 UTC m=+211.485265392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.506048 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.507217 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.007187239 +0000 UTC m=+211.589222972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.608405 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.608825 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.108811061 +0000 UTC m=+211.690846794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.709844 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.710887 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.210867516 +0000 UTC m=+211.792903249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.815868 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.816246 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.316206295 +0000 UTC m=+211.898242028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.826106 4809 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s8wdz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.826188 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" podUID="3e14730e-9ab1-4dd1-b786-142b82b59802" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.923036 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:17 crc kubenswrapper[4809]: E0312 08:02:17.923419 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-12 08:02:18.423406278 +0000 UTC m=+212.005442011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.939709 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pghrh" podStartSLOduration=173.939679399 podStartE2EDuration="2m53.939679399s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:17.908728443 +0000 UTC m=+211.490764176" watchObservedRunningTime="2026-03-12 08:02:17.939679399 +0000 UTC m=+211.521715132" Mar 12 08:02:17 crc kubenswrapper[4809]: I0312 08:02:17.940591 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" podStartSLOduration=173.940585116 podStartE2EDuration="2m53.940585116s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:17.937829006 +0000 UTC m=+211.519864739" watchObservedRunningTime="2026-03-12 08:02:17.940585116 +0000 UTC m=+211.522620849" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.024061 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.025031 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.525001329 +0000 UTC m=+212.107037062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.090651 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:18 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:18 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:18 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.090750 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.127170 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.127570 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.627553028 +0000 UTC m=+212.209588761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.252299 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.252715 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.752691671 +0000 UTC m=+212.334727394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.332778 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5hmb"] Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.354427 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.354960 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.854942741 +0000 UTC m=+212.436978474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.373982 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" event={"ID":"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9","Type":"ContainerStarted","Data":"1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.377100 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.382827 4809 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r5hmb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.382916 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" podUID="98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.411192 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" 
event={"ID":"5d4d6ee6-9302-4b5e-b917-3a52513210ed","Type":"ContainerStarted","Data":"66cbf8fe8744b53c55033e10e008ed66ca7ec14197ea6d76f70c25211b47988f"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.439263 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r24lv" event={"ID":"31ccf9d3-41db-457e-9341-744f7945b4f5","Type":"ContainerStarted","Data":"3b7c30dcae6cca74abecd2d41fff862524fd1281cc05dac1ad0ae1dd2a414caa"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.456024 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.456562 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:18.956542792 +0000 UTC m=+212.538578525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.470637 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" event={"ID":"2726995b-6d26-48c3-9e7d-da323657f55c","Type":"ContainerStarted","Data":"7c7616eba75ec48378c72b14617e010e94ef6cf6839c2c44c2525292ee8b724b"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.474021 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" podStartSLOduration=174.474009878 podStartE2EDuration="2m54.474009878s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.470239539 +0000 UTC m=+212.052275282" watchObservedRunningTime="2026-03-12 08:02:18.474009878 +0000 UTC m=+212.056045611" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.474846 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" podStartSLOduration=174.474841362 podStartE2EDuration="2m54.474841362s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.427182252 +0000 UTC m=+212.009217975" watchObservedRunningTime="2026-03-12 08:02:18.474841362 
+0000 UTC m=+212.056877095" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.490906 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" event={"ID":"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a","Type":"ContainerStarted","Data":"7f96c7f4a30eac05ea9139613648d415879efc9dd6f5a774a2d8581535fb29e6"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.492583 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" event={"ID":"959e6af8-6641-4e70-bb7f-cfe2c8ea56a6","Type":"ContainerStarted","Data":"0083ae8bf2275cf514b03c717d72c5b5624ffe71b4c23b11718bb2a921d5cda0"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.494800 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" event={"ID":"50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d","Type":"ContainerStarted","Data":"af4d0988f81328cbeea3cd4b51415350852baf2538a83139a2c662be96ba8f83"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.495399 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.496603 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" event={"ID":"512b5035-d9af-4615-b351-2199e94f9c50","Type":"ContainerStarted","Data":"dcf5ef7dc6b85329eeaf508ced67438c6450c1948cf660623dee159685fc882c"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.497522 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.497610 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vqtvl container/catalog-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.497642 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" podUID="50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.506363 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hrnnz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.506444 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.513840 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6jxb" podStartSLOduration=174.513824651 podStartE2EDuration="2m54.513824651s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.511518724 +0000 UTC m=+212.093554457" watchObservedRunningTime="2026-03-12 08:02:18.513824651 +0000 UTC m=+212.095860384" Mar 12 08:02:18 
crc kubenswrapper[4809]: I0312 08:02:18.522307 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" event={"ID":"ac7cc05e-989a-4474-9685-9600e3502dfd","Type":"ContainerStarted","Data":"6dd326b9a3bcd0f8b552830e0fa15fc8392f2f0fb450c2e7b71b827759bce6a5"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.523142 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.540598 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" podStartSLOduration=174.540582565 podStartE2EDuration="2m54.540582565s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.538620869 +0000 UTC m=+212.120656602" watchObservedRunningTime="2026-03-12 08:02:18.540582565 +0000 UTC m=+212.122618288" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.555449 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs"] Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.558970 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.561038 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.061023327 +0000 UTC m=+212.643059060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.568637 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-76fnn" event={"ID":"860674d0-3197-44c8-8912-15ec9f06c643","Type":"ContainerStarted","Data":"efb33129509a13c328426144ceb769c1cdb01dd7435b46feedcfa59580b84037"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.583741 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" podStartSLOduration=174.583723824 podStartE2EDuration="2m54.583723824s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.581625964 +0000 UTC m=+212.163661707" watchObservedRunningTime="2026-03-12 08:02:18.583723824 +0000 UTC m=+212.165759557" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.601710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" event={"ID":"879e1dfb-bff9-4ff0-99e3-912124941b77","Type":"ContainerStarted","Data":"2126523de89f269e1d0ea5853733009bc21d045e6feb02867d327af571330127"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.618573 4809 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-k88nv" podStartSLOduration=174.618548603 podStartE2EDuration="2m54.618548603s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.615759002 +0000 UTC m=+212.197794735" watchObservedRunningTime="2026-03-12 08:02:18.618548603 +0000 UTC m=+212.200584336" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.635717 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pxnfg" event={"ID":"f9891031-02d7-4b3a-8fb8-40fd8bb9a825","Type":"ContainerStarted","Data":"1719da72cf05f239e73f87025161cc6aaf95e97611b1a1f15b447bf27aa8e03a"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.655450 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podStartSLOduration=174.65542109 podStartE2EDuration="2m54.65542109s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.654766751 +0000 UTC m=+212.236802484" watchObservedRunningTime="2026-03-12 08:02:18.65542109 +0000 UTC m=+212.237456823" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.661666 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.661796 4809 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.161772144 +0000 UTC m=+212.743807877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.661989 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.665764 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.165743489 +0000 UTC m=+212.747779222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.676878 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" event={"ID":"e70704a8-4517-4ee2-8a1f-de93c014b0da","Type":"ContainerStarted","Data":"4a279c0df1ebfee823218552fbf729cc761d8ccdac1930a4a9b3856a858ecc33"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.680187 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfbkw" event={"ID":"54d01473-f99a-47d2-ae35-0a4b933b5098","Type":"ContainerStarted","Data":"2e8da6bc2778e48944c8147d8fdc1f6bc3502d6ae1eb81f2eec3343bc8198b3e"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.683626 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" event={"ID":"0621879e-29ff-49a3-81fc-1bde4a2d22ae","Type":"ContainerStarted","Data":"e7e4462561121bb3800e77ad26a95cbec413cc38dad9ef6b3a13b009d75431ac"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.684275 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.699871 4809 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tjjbt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial 
tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.700003 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" podUID="0621879e-29ff-49a3-81fc-1bde4a2d22ae" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.700932 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" event={"ID":"98a6e235-45c4-4192-8060-6771a197f829","Type":"ContainerStarted","Data":"f2482d8405b552986624ed142b4750eb83c0a13cc3491cd654ed65671a2ddbde"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.737339 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" event={"ID":"38092207-e107-4e5a-8706-b3ad66bea661","Type":"ContainerStarted","Data":"7f715b9e61d494d566b1f3662d3b58bf00c2178b8667b741ba9fd7d649d45a3b"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.737398 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" event={"ID":"38092207-e107-4e5a-8706-b3ad66bea661","Type":"ContainerStarted","Data":"5cebe937ecd853f6cb17650e37861b43f1073ab2d926e6640be300433a872676"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.738192 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.746496 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-76fnn" podStartSLOduration=7.746482586 podStartE2EDuration="7.746482586s" 
podCreationTimestamp="2026-03-12 08:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.71690891 +0000 UTC m=+212.298944643" watchObservedRunningTime="2026-03-12 08:02:18.746482586 +0000 UTC m=+212.328518309" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.750503 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" event={"ID":"bb72ef9d-43d2-476d-80e9-d3b19139c7a9","Type":"ContainerStarted","Data":"fd657e6e20ec82b87a501401c65337286f40f0bcb90d4fb038811a768510a469"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.765124 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.765534 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.265500827 +0000 UTC m=+212.847536560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.773855 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.776078 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.276056892 +0000 UTC m=+212.858092625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.788486 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" podStartSLOduration=174.788470491 podStartE2EDuration="2m54.788470491s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.749849944 +0000 UTC m=+212.331885687" watchObservedRunningTime="2026-03-12 08:02:18.788470491 +0000 UTC m=+212.370506224" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.788876 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" podStartSLOduration=174.788871614 podStartE2EDuration="2m54.788871614s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.786539356 +0000 UTC m=+212.368575089" watchObservedRunningTime="2026-03-12 08:02:18.788871614 +0000 UTC m=+212.370907347" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.790840 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.829108 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" podStartSLOduration=174.829090187 podStartE2EDuration="2m54.829090187s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.825136793 +0000 UTC m=+212.407172526" watchObservedRunningTime="2026-03-12 08:02:18.829090187 +0000 UTC m=+212.411125920" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.848424 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lt58n" event={"ID":"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7","Type":"ContainerStarted","Data":"3eac8e5a2a172d3dfb605073a900a2dad39c38b3ca5160e83266e607b1c2c87f"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.848485 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lt58n" event={"ID":"5ce55a3c-0a1e-4bb6-84be-c8fe85a210a7","Type":"ContainerStarted","Data":"3d499b30de268c0859c08850a6e9dda67b7169a61f7079ae7bb16d98d66889b7"} Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.848544 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.877809 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.878407 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.879697 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.880102 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.380059754 +0000 UTC m=+212.962095487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.880235 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.882110 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.382101512 +0000 UTC m=+212.964137245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.957095 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kmr2s" podStartSLOduration=174.957077503 podStartE2EDuration="2m54.957077503s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.956591488 +0000 UTC m=+212.538627221" watchObservedRunningTime="2026-03-12 08:02:18.957077503 +0000 UTC m=+212.539113226" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.960130 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" podStartSLOduration=174.960122411 podStartE2EDuration="2m54.960122411s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:18.864767981 +0000 UTC m=+212.446803714" watchObservedRunningTime="2026-03-12 08:02:18.960122411 +0000 UTC m=+212.542158144" Mar 12 08:02:18 crc kubenswrapper[4809]: I0312 08:02:18.986750 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:18 crc kubenswrapper[4809]: E0312 08:02:18.988415 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.488398059 +0000 UTC m=+213.070433792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.089237 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.089561 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.589547747 +0000 UTC m=+213.171583480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.099386 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:19 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:19 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:19 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.099458 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.132929 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" podStartSLOduration=175.132912953 podStartE2EDuration="2m55.132912953s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:19.131723628 +0000 UTC m=+212.713759361" watchObservedRunningTime="2026-03-12 08:02:19.132912953 +0000 UTC m=+212.714948686" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.178808 4809 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-dns/dns-default-lt58n" podStartSLOduration=8.178784631 podStartE2EDuration="8.178784631s" podCreationTimestamp="2026-03-12 08:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:19.172277552 +0000 UTC m=+212.754313295" watchObservedRunningTime="2026-03-12 08:02:19.178784631 +0000 UTC m=+212.760820364" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.190054 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.190220 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.690188831 +0000 UTC m=+213.272224564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.190361 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.190757 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.690735477 +0000 UTC m=+213.272771210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.219249 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6lhmm" podStartSLOduration=175.219229322 podStartE2EDuration="2m55.219229322s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:19.216424121 +0000 UTC m=+212.798459854" watchObservedRunningTime="2026-03-12 08:02:19.219229322 +0000 UTC m=+212.801265055" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.245844 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podStartSLOduration=175.245824642 podStartE2EDuration="2m55.245824642s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:19.245250236 +0000 UTC m=+212.827285969" watchObservedRunningTime="2026-03-12 08:02:19.245824642 +0000 UTC m=+212.827860375" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.291378 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.292927 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.791833314 +0000 UTC m=+213.373869047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.392986 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.393592 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:19.893575519 +0000 UTC m=+213.475611252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.416172 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.499794 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.500355 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.00033218 +0000 UTC m=+213.582367913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.544522 4809 ???:1] "http: TLS handshake error from 192.168.126.11:42302: no serving certificate available for the kubelet" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.601746 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.602303 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.102279602 +0000 UTC m=+213.684315335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.702935 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.703197 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.203137691 +0000 UTC m=+213.785173424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.703285 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.703658 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.203639285 +0000 UTC m=+213.785675199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.805313 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.805837 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.305814734 +0000 UTC m=+213.887850467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.865717 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vh7gf" event={"ID":"0a6838f8-a7aa-46c5-9e16-6885daff4a88","Type":"ContainerStarted","Data":"cde085a856e13f34f66c3aceaf9cdc55cc212ea02cab32e4aa1f06640fe2240d"} Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.877855 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" event={"ID":"d8b4cda2-f60b-4a9d-9302-ac44f7acfb7a","Type":"ContainerStarted","Data":"2c5e7cb97c5d9f248b3f77c89d918aa356c88c9632a5b16f14a3a6cb40cddc7c"} Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.897242 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d525c" event={"ID":"481587db-884d-425d-ba58-4921449275ef","Type":"ContainerStarted","Data":"081911b220498d4b7ad81dd2fa46e9d988a0bd7abf17f33fac9776a00b92388b"} Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.897324 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d525c" event={"ID":"481587db-884d-425d-ba58-4921449275ef","Type":"ContainerStarted","Data":"38f68c13415c1bd473393084eba8a803c9001029449dab02ca74294f66404fa4"} Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.907792 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:19 crc kubenswrapper[4809]: E0312 08:02:19.908515 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.408485596 +0000 UTC m=+213.990521509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.910375 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7qz8g" event={"ID":"98a6e235-45c4-4192-8060-6771a197f829","Type":"ContainerStarted","Data":"40fd35d32e9b5a7bd3cca5eff5ad2157e6d8f6d6787eb3be2418055725ab6301"} Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.916067 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bqvjf" podStartSLOduration=175.916048295 podStartE2EDuration="2m55.916048295s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:19.914404317 +0000 UTC m=+213.496440060" watchObservedRunningTime="2026-03-12 08:02:19.916048295 +0000 UTC 
m=+213.498084028" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.917187 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njprg" event={"ID":"88d136d4-995f-44d0-8691-e84bfacb68c3","Type":"ContainerStarted","Data":"73cb6701c2ef0e15f5ed60cb75f201015ba54a1b58b464db3592b6027e35c63f"} Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.924471 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8m4nh" event={"ID":"879e1dfb-bff9-4ff0-99e3-912124941b77","Type":"ContainerStarted","Data":"5187a45bace535c7c4d76c71103181b62bae5a500c0dc71023c6ecab7b4eeafb"} Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.928220 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" event={"ID":"9a23ba2c-e3a5-4447-955c-f60a09b92dcc","Type":"ContainerStarted","Data":"50d9329393003fa7dca68413991cd898ca46b4d4889075f52a111726da7f412a"} Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.958598 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d525c" podStartSLOduration=175.958556575 podStartE2EDuration="2m55.958556575s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:19.955894298 +0000 UTC m=+213.537930031" watchObservedRunningTime="2026-03-12 08:02:19.958556575 +0000 UTC m=+213.540592308" Mar 12 08:02:19 crc kubenswrapper[4809]: I0312 08:02:19.972318 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sm7hg" event={"ID":"5d4d6ee6-9302-4b5e-b917-3a52513210ed","Type":"ContainerStarted","Data":"ef5e863a6c0077de399f4b3d5916cf659e355c3eaeefdf3d20842196a4d498f4"} Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 
08:02:20.006714 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" podStartSLOduration=176.006685378 podStartE2EDuration="2m56.006685378s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:20.004688491 +0000 UTC m=+213.586724224" watchObservedRunningTime="2026-03-12 08:02:20.006685378 +0000 UTC m=+213.588721111" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.009468 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.010596 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.510575941 +0000 UTC m=+214.092611674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.035356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" event={"ID":"63625da7-0f0d-48f1-8b58-c75e04bc31e4","Type":"ContainerStarted","Data":"7846a855e1dfdb16db66ee630e5c34cfab65004b2e4ba492c88b071a31defb59"} Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.037887 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" podUID="98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" containerName="controller-manager" containerID="cri-o://1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a" gracePeriod=30 Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.038252 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" podUID="afb31fcb-c363-4434-a971-126881422750" containerName="route-controller-manager" containerID="cri-o://212f9ce05b5de2e2b96ebd71f3bfad25ec3dbf76ec0e9e37d6225508c00a3210" gracePeriod=30 Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.048337 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hrnnz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 
08:02:20.048413 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.056558 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.056623 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.087325 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:20 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:20 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:20 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.087695 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.090001 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.115885 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.117185 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.617166817 +0000 UTC m=+214.199202540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.217876 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.219769 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.719746467 +0000 UTC m=+214.301782200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.328484 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.328838 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.828825034 +0000 UTC m=+214.410860767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.429885 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.430192 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:20.930109927 +0000 UTC m=+214.512145870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.531211 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.531608 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.031594144 +0000 UTC m=+214.613629877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.636830 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.637011 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.136977716 +0000 UTC m=+214.719013449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.638126 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.638625 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.138604553 +0000 UTC m=+214.720640286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.720770 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.739316 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.739778 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.23972817 +0000 UTC m=+214.821763903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.789167 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg"] Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.789594 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" containerName="controller-manager" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.789613 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" containerName="controller-manager" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.789857 4809 
memory_manager.go:354] "RemoveStaleState removing state" podUID="98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" containerName="controller-manager" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.790544 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.808089 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg"] Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.840408 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config\") pod \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.840574 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-client-ca\") pod \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.840606 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vl6p\" (UniqueName: \"kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p\") pod \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.840657 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles\") pod \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 
08:02:20.840711 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-serving-cert\") pod \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\" (UID: \"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9\") " Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.841005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.841940 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.341923589 +0000 UTC m=+214.923959322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.842962 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config" (OuterVolumeSpecName: "config") pod "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.843400 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-client-ca" (OuterVolumeSpecName: "client-ca") pod "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.845396 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.866034 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.874601 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p" (OuterVolumeSpecName: "kube-api-access-7vl6p") pod "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" (UID: "98cc5813-f6d5-4e2a-a7d4-2546e18e60f9"). InnerVolumeSpecName "kube-api-access-7vl6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.916659 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vhrcn"] Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.917982 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.926601 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945283 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945507 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-proxy-ca-bundles\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945544 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-client-ca\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945612 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ltd\" (UniqueName: \"kubernetes.io/projected/cc396022-e360-45ad-a69d-c913cf15bfc2-kube-api-access-f4ltd\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945633 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc396022-e360-45ad-a69d-c913cf15bfc2-serving-cert\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945676 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-config\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945749 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945761 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945771 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-config\") on node \"crc\" DevicePath 
\"\"" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945864 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.945878 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vl6p\" (UniqueName: \"kubernetes.io/projected/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9-kube-api-access-7vl6p\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:20 crc kubenswrapper[4809]: E0312 08:02:20.945938 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.445912969 +0000 UTC m=+215.027948892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.948668 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhrcn"] Mar 12 08:02:20 crc kubenswrapper[4809]: I0312 08:02:20.984627 4809 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.047207 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-config\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.047287 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.047322 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-proxy-ca-bundles\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.047347 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-client-ca\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.047395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-catalog-content\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 
08:02:21.047422 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrjd7\" (UniqueName: \"kubernetes.io/projected/c914c474-5d5b-415b-ad58-76c7ac15dc94-kube-api-access-mrjd7\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.047446 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4ltd\" (UniqueName: \"kubernetes.io/projected/cc396022-e360-45ad-a69d-c913cf15bfc2-kube-api-access-f4ltd\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.047472 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc396022-e360-45ad-a69d-c913cf15bfc2-serving-cert\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.047504 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-utilities\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: E0312 08:02:21.048097 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.548077486 +0000 UTC m=+215.130113219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.048849 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-config\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.049308 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-proxy-ca-bundles\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.049875 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-client-ca\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.052304 4809 generic.go:334] "Generic (PLEG): container finished" podID="afb31fcb-c363-4434-a971-126881422750" containerID="212f9ce05b5de2e2b96ebd71f3bfad25ec3dbf76ec0e9e37d6225508c00a3210" exitCode=0 Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.052393 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" event={"ID":"afb31fcb-c363-4434-a971-126881422750","Type":"ContainerDied","Data":"212f9ce05b5de2e2b96ebd71f3bfad25ec3dbf76ec0e9e37d6225508c00a3210"} Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.061543 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc396022-e360-45ad-a69d-c913cf15bfc2-serving-cert\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.073953 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4ltd\" (UniqueName: \"kubernetes.io/projected/cc396022-e360-45ad-a69d-c913cf15bfc2-kube-api-access-f4ltd\") pod \"controller-manager-5dd879c6b8-wpdhg\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.086305 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:21 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:21 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:21 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.086390 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 
08:02:21.098554 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njprg" event={"ID":"88d136d4-995f-44d0-8691-e84bfacb68c3","Type":"ContainerStarted","Data":"31fe7bac2bc2801dad984e742294d1072945c5f7f93ef5cc4aea365dbc38312a"} Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.133335 4809 generic.go:334] "Generic (PLEG): container finished" podID="42f8eef2-550d-4e94-9719-3f2abbbb3ecc" containerID="e3ae3ffc84184d6fe0be1bcee6b636b19a55d8c06909baf833e1e44c86967155" exitCode=0 Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.134300 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" event={"ID":"42f8eef2-550d-4e94-9719-3f2abbbb3ecc","Type":"ContainerDied","Data":"e3ae3ffc84184d6fe0be1bcee6b636b19a55d8c06909baf833e1e44c86967155"} Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.151871 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pqsb2"] Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.139970 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.149049 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:21 crc kubenswrapper[4809]: E0312 08:02:21.149159 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 08:02:21.649113881 +0000 UTC m=+215.231149614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.156911 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.157168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-catalog-content\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.157590 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrjd7\" (UniqueName: \"kubernetes.io/projected/c914c474-5d5b-415b-ad58-76c7ac15dc94-kube-api-access-mrjd7\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.157825 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-utilities\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.159145 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-utilities\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.149421 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.159669 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-catalog-content\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.143466 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.149343 4809 generic.go:334] "Generic (PLEG): container finished" podID="98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" containerID="1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a" exitCode=0 Mar 12 08:02:21 crc kubenswrapper[4809]: E0312 08:02:21.159682 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-12 08:02:21.659652507 +0000 UTC m=+215.241688240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:21 crc kubenswrapper[4809]: E0312 08:02:21.162408 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb31fcb-c363-4434-a971-126881422750" containerName="route-controller-manager" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.162435 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb31fcb-c363-4434-a971-126881422750" containerName="route-controller-manager" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.162792 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb31fcb-c363-4434-a971-126881422750" containerName="route-controller-manager" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.167853 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqsb2"] Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.167900 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" event={"ID":"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9","Type":"ContainerDied","Data":"1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a"} Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.167927 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" 
event={"ID":"98cc5813-f6d5-4e2a-a7d4-2546e18e60f9","Type":"ContainerDied","Data":"f0dfe500fa4e8d2ea4696c0ee2c91e58dbc07b967dcaccca3dcdb7cce66d03a1"} Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.167952 4809 scope.go:117] "RemoveContainer" containerID="1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.168328 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.172952 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.187476 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.198645 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.218600 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrjd7\" (UniqueName: \"kubernetes.io/projected/c914c474-5d5b-415b-ad58-76c7ac15dc94-kube-api-access-mrjd7\") pod \"certified-operators-vhrcn\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.249874 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.260389 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.260521 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqkdw\" (UniqueName: \"kubernetes.io/projected/afb31fcb-c363-4434-a971-126881422750-kube-api-access-vqkdw\") pod \"afb31fcb-c363-4434-a971-126881422750\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.260582 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-client-ca\") pod \"afb31fcb-c363-4434-a971-126881422750\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.261014 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-config\") pod \"afb31fcb-c363-4434-a971-126881422750\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.261086 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb31fcb-c363-4434-a971-126881422750-serving-cert\") pod \"afb31fcb-c363-4434-a971-126881422750\" (UID: \"afb31fcb-c363-4434-a971-126881422750\") " Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.261623 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-utilities\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.261839 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlkfz\" (UniqueName: \"kubernetes.io/projected/bd4be45a-8370-4cbe-a718-bb31fd64d99a-kube-api-access-nlkfz\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.261972 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-catalog-content\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.263674 4809 scope.go:117] "RemoveContainer" containerID="1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a" Mar 12 08:02:21 crc kubenswrapper[4809]: E0312 08:02:21.265086 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.765057908 +0000 UTC m=+215.347093781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.276960 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-config" (OuterVolumeSpecName: "config") pod "afb31fcb-c363-4434-a971-126881422750" (UID: "afb31fcb-c363-4434-a971-126881422750"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.277038 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-client-ca" (OuterVolumeSpecName: "client-ca") pod "afb31fcb-c363-4434-a971-126881422750" (UID: "afb31fcb-c363-4434-a971-126881422750"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.277883 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb31fcb-c363-4434-a971-126881422750-kube-api-access-vqkdw" (OuterVolumeSpecName: "kube-api-access-vqkdw") pod "afb31fcb-c363-4434-a971-126881422750" (UID: "afb31fcb-c363-4434-a971-126881422750"). InnerVolumeSpecName "kube-api-access-vqkdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:21 crc kubenswrapper[4809]: E0312 08:02:21.282086 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a\": container with ID starting with 1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a not found: ID does not exist" containerID="1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.282199 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a"} err="failed to get container status \"1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a\": rpc error: code = NotFound desc = could not find container \"1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a\": container with ID starting with 1c88c89720f768a5d2d85e3f188c75ee8b83e83def7f4b3a7d5b927dfcb3601a not found: ID does not exist" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.302791 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb31fcb-c363-4434-a971-126881422750-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "afb31fcb-c363-4434-a971-126881422750" (UID: "afb31fcb-c363-4434-a971-126881422750"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.308523 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m6gcg"] Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.316518 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.323384 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6gcg"] Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.356577 4809 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-12T08:02:20.984658201Z","Handler":null,"Name":""} Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363460 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-utilities\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-catalog-content\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363532 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363603 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-catalog-content\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363630 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-utilities\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363652 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk59x\" (UniqueName: \"kubernetes.io/projected/511df0b5-a255-46fe-aeba-cd5daa01e7c9-kube-api-access-xk59x\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363675 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlkfz\" (UniqueName: \"kubernetes.io/projected/bd4be45a-8370-4cbe-a718-bb31fd64d99a-kube-api-access-nlkfz\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363711 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqkdw\" (UniqueName: \"kubernetes.io/projected/afb31fcb-c363-4434-a971-126881422750-kube-api-access-vqkdw\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363739 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-client-ca\") on node \"crc\" DevicePath \"\"" Mar 
12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363752 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb31fcb-c363-4434-a971-126881422750-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.363762 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb31fcb-c363-4434-a971-126881422750-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.364575 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-catalog-content\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: E0312 08:02:21.364933 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 08:02:21.864920279 +0000 UTC m=+215.446956012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5d8gm" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.365319 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-utilities\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.383844 4809 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.383900 4809 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.393028 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlkfz\" (UniqueName: \"kubernetes.io/projected/bd4be45a-8370-4cbe-a718-bb31fd64d99a-kube-api-access-nlkfz\") pod \"community-operators-pqsb2\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.468029 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.468434 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-catalog-content\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.468485 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk59x\" (UniqueName: \"kubernetes.io/projected/511df0b5-a255-46fe-aeba-cd5daa01e7c9-kube-api-access-xk59x\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.468516 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-utilities\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.469044 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-utilities\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.469296 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-catalog-content\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.480147 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.506555 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-btdvf"] Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.507210 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk59x\" (UniqueName: \"kubernetes.io/projected/511df0b5-a255-46fe-aeba-cd5daa01e7c9-kube-api-access-xk59x\") pod \"certified-operators-m6gcg\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.508437 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.517711 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btdvf"] Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.569658 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.569709 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-catalog-content\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.569743 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-utilities\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.569837 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7hr\" (UniqueName: \"kubernetes.io/projected/1df02216-0a1b-4417-9914-e7b9452a9c6b-kube-api-access-xl7hr\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 
08:02:21.579779 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.584422 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.584454 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.623480 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5d8gm\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.657373 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.671150 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-catalog-content\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.671227 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-utilities\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.671290 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl7hr\" (UniqueName: \"kubernetes.io/projected/1df02216-0a1b-4417-9914-e7b9452a9c6b-kube-api-access-xl7hr\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.671986 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-catalog-content\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.672054 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-utilities\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " 
pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.686271 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhrcn"] Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.696779 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl7hr\" (UniqueName: \"kubernetes.io/projected/1df02216-0a1b-4417-9914-e7b9452a9c6b-kube-api-access-xl7hr\") pod \"community-operators-btdvf\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: W0312 08:02:21.724365 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc914c474_5d5b_415b_ad58_76c7ac15dc94.slice/crio-51a4ff237bf9424c77a6bd2270d90673c891acd17c99791159b4e9ce221d45bb WatchSource:0}: Error finding container 51a4ff237bf9424c77a6bd2270d90673c891acd17c99791159b4e9ce221d45bb: Status 404 returned error can't find the container with id 51a4ff237bf9424c77a6bd2270d90673c891acd17c99791159b4e9ce221d45bb Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.768866 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg"] Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.850042 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.855513 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:02:21 crc kubenswrapper[4809]: I0312 08:02:21.983827 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqsb2"] Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.073375 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6gcg"] Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.084072 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:22 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:22 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:22 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.084154 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.199039 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" event={"ID":"cc396022-e360-45ad-a69d-c913cf15bfc2","Type":"ContainerStarted","Data":"2d2380dfe104c7db5d89ab69e54a3bd07788ff2767d27f9f6c20e55700f0ea54"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.199090 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" event={"ID":"cc396022-e360-45ad-a69d-c913cf15bfc2","Type":"ContainerStarted","Data":"28f29299a232d5eed2204855428075b36310e6413796bd6fc3b9c3f95a413c12"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.199531 
4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.208567 4809 patch_prober.go:28] interesting pod/controller-manager-5dd879c6b8-wpdhg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.208648 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" podUID="cc396022-e360-45ad-a69d-c913cf15bfc2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.208993 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqsb2" event={"ID":"bd4be45a-8370-4cbe-a718-bb31fd64d99a","Type":"ContainerStarted","Data":"4d44820f43bd97e63156bb805d5aa7b888ae62c51e188ca1f434ab459cecd21d"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.220608 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.221470 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs" event={"ID":"afb31fcb-c363-4434-a971-126881422750","Type":"ContainerDied","Data":"b6a5875ba921eb35affcb34b2f123a07e0462d157033e74af60781b99bf1f454"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.221608 4809 scope.go:117] "RemoveContainer" containerID="212f9ce05b5de2e2b96ebd71f3bfad25ec3dbf76ec0e9e37d6225508c00a3210" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.246057 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.246090 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.248446 4809 patch_prober.go:28] interesting pod/console-f9d7485db-rvfv4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.248511 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rvfv4" podUID="6bd51111-825a-4679-95fa-6dfe33ff138c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.256320 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btdvf"] Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.257696 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-njprg" event={"ID":"88d136d4-995f-44d0-8691-e84bfacb68c3","Type":"ContainerStarted","Data":"cb689ffe95ee36f00339a17c368efead059f285253945a0d63d32a24bf1955c1"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.258018 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-njprg" event={"ID":"88d136d4-995f-44d0-8691-e84bfacb68c3","Type":"ContainerStarted","Data":"815f44a95bcc33d14e774eba9d9b0b07950a9e203250e932fd05928b6b4c8467"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.276301 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" podStartSLOduration=4.276269712 podStartE2EDuration="4.276269712s" podCreationTimestamp="2026-03-12 08:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:22.235479961 +0000 UTC m=+215.817515694" watchObservedRunningTime="2026-03-12 08:02:22.276269712 +0000 UTC m=+215.858305445" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.296064 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6gcg" event={"ID":"511df0b5-a255-46fe-aeba-cd5daa01e7c9","Type":"ContainerStarted","Data":"d69eda06064bdb407560d3f538d582f6d628efe96b17652ce937534d9f250d61"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.304963 4809 generic.go:334] "Generic (PLEG): container finished" podID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerID="c6ac79cab9604a6ea1f843ee80c9be80b0d41407f6da541a7b6f125de9476009" exitCode=0 Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.307609 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhrcn" 
event={"ID":"c914c474-5d5b-415b-ad58-76c7ac15dc94","Type":"ContainerDied","Data":"c6ac79cab9604a6ea1f843ee80c9be80b0d41407f6da541a7b6f125de9476009"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.307666 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhrcn" event={"ID":"c914c474-5d5b-415b-ad58-76c7ac15dc94","Type":"ContainerStarted","Data":"51a4ff237bf9424c77a6bd2270d90673c891acd17c99791159b4e9ce221d45bb"} Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.315419 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-njprg" podStartSLOduration=11.315392425 podStartE2EDuration="11.315392425s" podCreationTimestamp="2026-03-12 08:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:22.302386579 +0000 UTC m=+215.884422312" watchObservedRunningTime="2026-03-12 08:02:22.315392425 +0000 UTC m=+215.897428158" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.339612 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5d8gm"] Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.343839 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs"] Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.345267 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pqkrs"] Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.621772 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.700630 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-secret-volume\") pod \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.700748 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-config-volume\") pod \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.700777 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zchwv\" (UniqueName: \"kubernetes.io/projected/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-kube-api-access-zchwv\") pod \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\" (UID: \"42f8eef2-550d-4e94-9719-3f2abbbb3ecc\") " Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.702994 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-config-volume" (OuterVolumeSpecName: "config-volume") pod "42f8eef2-550d-4e94-9719-3f2abbbb3ecc" (UID: "42f8eef2-550d-4e94-9719-3f2abbbb3ecc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.708460 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "42f8eef2-550d-4e94-9719-3f2abbbb3ecc" (UID: "42f8eef2-550d-4e94-9719-3f2abbbb3ecc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.708936 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-kube-api-access-zchwv" (OuterVolumeSpecName: "kube-api-access-zchwv") pod "42f8eef2-550d-4e94-9719-3f2abbbb3ecc" (UID: "42f8eef2-550d-4e94-9719-3f2abbbb3ecc"). InnerVolumeSpecName "kube-api-access-zchwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.802998 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.803037 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.803050 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zchwv\" (UniqueName: \"kubernetes.io/projected/42f8eef2-550d-4e94-9719-3f2abbbb3ecc-kube-api-access-zchwv\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.906808 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jn4tc"] Mar 12 08:02:22 crc kubenswrapper[4809]: E0312 08:02:22.907080 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f8eef2-550d-4e94-9719-3f2abbbb3ecc" containerName="collect-profiles" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.907096 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f8eef2-550d-4e94-9719-3f2abbbb3ecc" containerName="collect-profiles" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.907258 4809 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="42f8eef2-550d-4e94-9719-3f2abbbb3ecc" containerName="collect-profiles" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.908157 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.911099 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 08:02:22 crc kubenswrapper[4809]: I0312 08:02:22.927274 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jn4tc"] Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.006467 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-catalog-content\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.006585 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kww\" (UniqueName: \"kubernetes.io/projected/c060e910-de6e-43df-a148-66f07bc71180-kube-api-access-96kww\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.006767 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-utilities\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.083562 4809 patch_prober.go:28] 
interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:23 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:23 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:23 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.083657 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.109457 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-utilities\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.109575 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-catalog-content\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.109636 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96kww\" (UniqueName: \"kubernetes.io/projected/c060e910-de6e-43df-a148-66f07bc71180-kube-api-access-96kww\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.110058 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-utilities\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.110074 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-catalog-content\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.116661 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.117496 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb31fcb-c363-4434-a971-126881422750" path="/var/lib/kubelet/pods/afb31fcb-c363-4434-a971-126881422750/volumes" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.131410 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96kww\" (UniqueName: \"kubernetes.io/projected/c060e910-de6e-43df-a148-66f07bc71180-kube-api-access-96kww\") pod \"redhat-marketplace-jn4tc\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.226906 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.314871 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wmft4"] Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.316307 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.335035 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmft4"] Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.411722 4809 generic.go:334] "Generic (PLEG): container finished" podID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerID="1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0" exitCode=0 Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.411785 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6gcg" event={"ID":"511df0b5-a255-46fe-aeba-cd5daa01e7c9","Type":"ContainerDied","Data":"1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0"} Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.417111 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5wj\" (UniqueName: \"kubernetes.io/projected/6adda456-f49a-4a6d-b09b-8841158e9268-kube-api-access-mj5wj\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.417191 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-utilities\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " 
pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.417291 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-catalog-content\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.438004 4809 generic.go:334] "Generic (PLEG): container finished" podID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerID="204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1" exitCode=0 Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.438102 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btdvf" event={"ID":"1df02216-0a1b-4417-9914-e7b9452a9c6b","Type":"ContainerDied","Data":"204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1"} Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.438158 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btdvf" event={"ID":"1df02216-0a1b-4417-9914-e7b9452a9c6b","Type":"ContainerStarted","Data":"d9d01e20a325b460e97026229e56f712e5d9feb785b9eb4ba03efe2b351ec00d"} Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.447589 4809 generic.go:334] "Generic (PLEG): container finished" podID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerID="64665dd2e7b9f48fe25e16a0771d817ce1c5ee178d4741af1f43404a29c495a7" exitCode=0 Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.447658 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqsb2" event={"ID":"bd4be45a-8370-4cbe-a718-bb31fd64d99a","Type":"ContainerDied","Data":"64665dd2e7b9f48fe25e16a0771d817ce1c5ee178d4741af1f43404a29c495a7"} Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 
08:02:23.485038 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" event={"ID":"b1c6c047-6fde-4d86-a82c-d8d259265412","Type":"ContainerStarted","Data":"355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599"} Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.485095 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" event={"ID":"b1c6c047-6fde-4d86-a82c-d8d259265412","Type":"ContainerStarted","Data":"920dd07679d4801e7ea236ad649d9bcd2bf86dca0add2da2d49e152f03ed3b48"} Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.485154 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.518182 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-catalog-content\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.518294 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj5wj\" (UniqueName: \"kubernetes.io/projected/6adda456-f49a-4a6d-b09b-8841158e9268-kube-api-access-mj5wj\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.518321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-utilities\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " pod="openshift-marketplace/redhat-marketplace-wmft4" 
Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.519543 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-catalog-content\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.520213 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-utilities\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.548387 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.548789 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h" event={"ID":"42f8eef2-550d-4e94-9719-3f2abbbb3ecc","Type":"ContainerDied","Data":"bf01c3cb2fbdd09ec40c0a06c15ee62decd045987033f20ad5664b140fe01aab"} Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.548813 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf01c3cb2fbdd09ec40c0a06c15ee62decd045987033f20ad5664b140fe01aab" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.555976 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj5wj\" (UniqueName: \"kubernetes.io/projected/6adda456-f49a-4a6d-b09b-8841158e9268-kube-api-access-mj5wj\") pod \"redhat-marketplace-wmft4\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 
08:02:23.569314 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.641502 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn"] Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.646188 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.657981 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.665217 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.665725 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.666030 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.666335 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.669028 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.677823 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" podStartSLOduration=179.677776506 podStartE2EDuration="2m59.677776506s" 
podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:23.669488916 +0000 UTC m=+217.251524649" watchObservedRunningTime="2026-03-12 08:02:23.677776506 +0000 UTC m=+217.259812239" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.690259 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn"] Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.739039 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.840157 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-serving-cert\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.840630 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz95q\" (UniqueName: \"kubernetes.io/projected/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-kube-api-access-cz95q\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.840675 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-client-ca\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: 
\"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.840700 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-config\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.950942 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-config\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.951103 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-serving-cert\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.951192 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz95q\" (UniqueName: \"kubernetes.io/projected/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-kube-api-access-cz95q\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.951253 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-client-ca\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.954241 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-client-ca\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.955693 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-config\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.962998 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-serving-cert\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.969077 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-fl8dg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.969176 4809 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fl8dg" podUID="a752ff7b-9553-492d-83d0-42bb9ea5dfa9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.969228 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-fl8dg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.969274 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fl8dg" podUID="a752ff7b-9553-492d-83d0-42bb9ea5dfa9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Mar 12 08:02:23 crc kubenswrapper[4809]: I0312 08:02:23.972389 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz95q\" (UniqueName: \"kubernetes.io/projected/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-kube-api-access-cz95q\") pod \"route-controller-manager-6686bd87bb-zdkcn\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.005803 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.006527 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.011258 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.011448 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.012263 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.012293 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.029848 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.032825 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.061928 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.078049 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.078938 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.078981 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.082853 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:24 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:24 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:24 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.082888 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.092886 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.093054 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jn4tc"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.114382 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qrn8"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.115414 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.119209 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.151155 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qrn8"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.159869 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6a0553-31de-43c6-8b88-95e21d1852af-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b6a0553-31de-43c6-8b88-95e21d1852af\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.160082 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6a0553-31de-43c6-8b88-95e21d1852af-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b6a0553-31de-43c6-8b88-95e21d1852af\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.235995 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmft4"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.262534 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6a0553-31de-43c6-8b88-95e21d1852af-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b6a0553-31de-43c6-8b88-95e21d1852af\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.262579 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/7b6a0553-31de-43c6-8b88-95e21d1852af-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b6a0553-31de-43c6-8b88-95e21d1852af\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.262688 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-catalog-content\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.262770 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvthr\" (UniqueName: \"kubernetes.io/projected/f283aa6d-85ad-44ff-8758-d8251b00ae50-kube-api-access-cvthr\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.262814 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-utilities\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.265578 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6a0553-31de-43c6-8b88-95e21d1852af-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b6a0553-31de-43c6-8b88-95e21d1852af\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:24 crc kubenswrapper[4809]: W0312 08:02:24.289608 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6adda456_f49a_4a6d_b09b_8841158e9268.slice/crio-40fb895a60cbccb4c660aa8fe9e0263ae497a534e36ce898654003ff20d2b6b6 WatchSource:0}: Error finding container 40fb895a60cbccb4c660aa8fe9e0263ae497a534e36ce898654003ff20d2b6b6: Status 404 returned error can't find the container with id 40fb895a60cbccb4c660aa8fe9e0263ae497a534e36ce898654003ff20d2b6b6 Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.314207 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6a0553-31de-43c6-8b88-95e21d1852af-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b6a0553-31de-43c6-8b88-95e21d1852af\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.356508 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.372018 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-catalog-content\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.372084 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvthr\" (UniqueName: \"kubernetes.io/projected/f283aa6d-85ad-44ff-8758-d8251b00ae50-kube-api-access-cvthr\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.372115 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-utilities\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.372545 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-utilities\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.373067 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-catalog-content\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.401320 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvthr\" (UniqueName: \"kubernetes.io/projected/f283aa6d-85ad-44ff-8758-d8251b00ae50-kube-api-access-cvthr\") pod \"redhat-operators-4qrn8\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.445872 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.526973 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jkqnv"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.547442 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.570231 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jkqnv"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.607746 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmft4" event={"ID":"6adda456-f49a-4a6d-b09b-8841158e9268","Type":"ContainerStarted","Data":"40fb895a60cbccb4c660aa8fe9e0263ae497a534e36ce898654003ff20d2b6b6"} Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.609203 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn"] Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.633459 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" event={"ID":"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e","Type":"ContainerStarted","Data":"a1e14f0b58ecc81e4880cf01791a34aa005aeb6159cc441294b20a26dec11fb7"} Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.644036 4809 generic.go:334] "Generic (PLEG): container finished" podID="c060e910-de6e-43df-a148-66f07bc71180" containerID="98d4dcd19719f9d446fe81d94fdb22895a3e29ab190abea832e613920b2fd301" exitCode=0 Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.645823 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jn4tc" event={"ID":"c060e910-de6e-43df-a148-66f07bc71180","Type":"ContainerDied","Data":"98d4dcd19719f9d446fe81d94fdb22895a3e29ab190abea832e613920b2fd301"} Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.645860 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jn4tc" 
event={"ID":"c060e910-de6e-43df-a148-66f07bc71180","Type":"ContainerStarted","Data":"37444f75c0babbbe85544bb605e046938218da9005ce316ca32c52d02ef8b24f"} Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.666240 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d525c" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.669706 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fvmsc" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.677256 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphrv\" (UniqueName: \"kubernetes.io/projected/75d1a803-df56-424d-ace1-ecc868081fca-kube-api-access-wphrv\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.677626 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-catalog-content\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.677722 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-utilities\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.744546 4809 ???:1] "http: TLS handshake error from 192.168.126.11:53754: no serving certificate available for the kubelet" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 
08:02:24.779777 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphrv\" (UniqueName: \"kubernetes.io/projected/75d1a803-df56-424d-ace1-ecc868081fca-kube-api-access-wphrv\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.780013 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-catalog-content\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.780075 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-utilities\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.790263 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-catalog-content\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.791890 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-utilities\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.834216 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wphrv\" (UniqueName: \"kubernetes.io/projected/75d1a803-df56-424d-ace1-ecc868081fca-kube-api-access-wphrv\") pod \"redhat-operators-jkqnv\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:24 crc kubenswrapper[4809]: I0312 08:02:24.906486 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.096372 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:25 crc kubenswrapper[4809]: [-]has-synced failed: reason withheld Mar 12 08:02:25 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:25 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.096460 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.252552 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qrn8"] Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.294955 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 08:02:25 crc kubenswrapper[4809]: W0312 08:02:25.301346 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf283aa6d_85ad_44ff_8758_d8251b00ae50.slice/crio-0506ef62eabeb4fb3a1d648b591cd0a6ae41373327d468dca287c92c093b397c WatchSource:0}: Error finding container 
0506ef62eabeb4fb3a1d648b591cd0a6ae41373327d468dca287c92c093b397c: Status 404 returned error can't find the container with id 0506ef62eabeb4fb3a1d648b591cd0a6ae41373327d468dca287c92c093b397c Mar 12 08:02:25 crc kubenswrapper[4809]: W0312 08:02:25.324633 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7b6a0553_31de_43c6_8b88_95e21d1852af.slice/crio-3905851a796c1dc98401610e23073b67c67434b6961451beb4cfcd755bdbe32c WatchSource:0}: Error finding container 3905851a796c1dc98401610e23073b67c67434b6961451beb4cfcd755bdbe32c: Status 404 returned error can't find the container with id 3905851a796c1dc98401610e23073b67c67434b6961451beb4cfcd755bdbe32c Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.662368 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7b6a0553-31de-43c6-8b88-95e21d1852af","Type":"ContainerStarted","Data":"3905851a796c1dc98401610e23073b67c67434b6961451beb4cfcd755bdbe32c"} Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.669045 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" event={"ID":"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e","Type":"ContainerStarted","Data":"c986b0d598fc8e45f76eb40d02efb60ea62d7feac90a6c53d6aaa84a329b8e13"} Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.669807 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.672709 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qrn8" event={"ID":"f283aa6d-85ad-44ff-8758-d8251b00ae50","Type":"ContainerStarted","Data":"c46ed0c35d3dadc5f3cfa341d9b06bd7ad18e531a54d2367f9cfdcbe3c3e9d5e"} Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.672746 4809 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qrn8" event={"ID":"f283aa6d-85ad-44ff-8758-d8251b00ae50","Type":"ContainerStarted","Data":"0506ef62eabeb4fb3a1d648b591cd0a6ae41373327d468dca287c92c093b397c"} Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.675251 4809 generic.go:334] "Generic (PLEG): container finished" podID="6adda456-f49a-4a6d-b09b-8841158e9268" containerID="6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23" exitCode=0 Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.675398 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmft4" event={"ID":"6adda456-f49a-4a6d-b09b-8841158e9268","Type":"ContainerDied","Data":"6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23"} Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.688652 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" podStartSLOduration=7.688631269 podStartE2EDuration="7.688631269s" podCreationTimestamp="2026-03-12 08:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:25.684927073 +0000 UTC m=+219.266962806" watchObservedRunningTime="2026-03-12 08:02:25.688631269 +0000 UTC m=+219.270667002" Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.782678 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jkqnv"] Mar 12 08:02:25 crc kubenswrapper[4809]: I0312 08:02:25.852276 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.082554 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 08:02:26 crc kubenswrapper[4809]: [+]has-synced ok Mar 12 08:02:26 crc kubenswrapper[4809]: [+]process-running ok Mar 12 08:02:26 crc kubenswrapper[4809]: healthz check failed Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.082963 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.288313 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lt58n" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.479144 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.480235 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.482893 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.483176 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.509633 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.621317 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18583b8a-a86f-44e3-b813-eb9df44eab48-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"18583b8a-a86f-44e3-b813-eb9df44eab48\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.621408 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18583b8a-a86f-44e3-b813-eb9df44eab48-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"18583b8a-a86f-44e3-b813-eb9df44eab48\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.686406 4809 generic.go:334] "Generic (PLEG): container finished" podID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerID="c46ed0c35d3dadc5f3cfa341d9b06bd7ad18e531a54d2367f9cfdcbe3c3e9d5e" exitCode=0 Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.686492 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qrn8" 
event={"ID":"f283aa6d-85ad-44ff-8758-d8251b00ae50","Type":"ContainerDied","Data":"c46ed0c35d3dadc5f3cfa341d9b06bd7ad18e531a54d2367f9cfdcbe3c3e9d5e"} Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.691969 4809 generic.go:334] "Generic (PLEG): container finished" podID="75d1a803-df56-424d-ace1-ecc868081fca" containerID="b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6" exitCode=0 Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.692043 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkqnv" event={"ID":"75d1a803-df56-424d-ace1-ecc868081fca","Type":"ContainerDied","Data":"b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6"} Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.692071 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkqnv" event={"ID":"75d1a803-df56-424d-ace1-ecc868081fca","Type":"ContainerStarted","Data":"a1b74cefe74c673a58eef8be378af9c6867ed22ac98aaebe1df76dd907066b09"} Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.722359 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7b6a0553-31de-43c6-8b88-95e21d1852af","Type":"ContainerStarted","Data":"a902123c50f289093cbfef71d4010f2be9d6a6dbbd31a4f3191cb70d6c21ffae"} Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.726672 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18583b8a-a86f-44e3-b813-eb9df44eab48-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"18583b8a-a86f-44e3-b813-eb9df44eab48\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.726818 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/18583b8a-a86f-44e3-b813-eb9df44eab48-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"18583b8a-a86f-44e3-b813-eb9df44eab48\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.726872 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18583b8a-a86f-44e3-b813-eb9df44eab48-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"18583b8a-a86f-44e3-b813-eb9df44eab48\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.760868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18583b8a-a86f-44e3-b813-eb9df44eab48-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"18583b8a-a86f-44e3-b813-eb9df44eab48\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.770645 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.770617883 podStartE2EDuration="3.770617883s" podCreationTimestamp="2026-03-12 08:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:26.766894705 +0000 UTC m=+220.348930438" watchObservedRunningTime="2026-03-12 08:02:26.770617883 +0000 UTC m=+220.352653626" Mar 12 08:02:26 crc kubenswrapper[4809]: I0312 08:02:26.821907 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:27 crc kubenswrapper[4809]: I0312 08:02:27.083402 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:27 crc kubenswrapper[4809]: I0312 08:02:27.088456 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ttmrw" Mar 12 08:02:27 crc kubenswrapper[4809]: I0312 08:02:27.138502 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 08:02:27 crc kubenswrapper[4809]: W0312 08:02:27.193314 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod18583b8a_a86f_44e3_b813_eb9df44eab48.slice/crio-f003df182b8801dfaa36e6173e2ccb650523d1cafe49ffff13e6ba6688ffb586 WatchSource:0}: Error finding container f003df182b8801dfaa36e6173e2ccb650523d1cafe49ffff13e6ba6688ffb586: Status 404 returned error can't find the container with id f003df182b8801dfaa36e6173e2ccb650523d1cafe49ffff13e6ba6688ffb586 Mar 12 08:02:27 crc kubenswrapper[4809]: I0312 08:02:27.743835 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"18583b8a-a86f-44e3-b813-eb9df44eab48","Type":"ContainerStarted","Data":"f003df182b8801dfaa36e6173e2ccb650523d1cafe49ffff13e6ba6688ffb586"} Mar 12 08:02:27 crc kubenswrapper[4809]: I0312 08:02:27.746627 4809 generic.go:334] "Generic (PLEG): container finished" podID="7b6a0553-31de-43c6-8b88-95e21d1852af" containerID="a902123c50f289093cbfef71d4010f2be9d6a6dbbd31a4f3191cb70d6c21ffae" exitCode=0 Mar 12 08:02:27 crc kubenswrapper[4809]: I0312 08:02:27.747994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"7b6a0553-31de-43c6-8b88-95e21d1852af","Type":"ContainerDied","Data":"a902123c50f289093cbfef71d4010f2be9d6a6dbbd31a4f3191cb70d6c21ffae"} Mar 12 08:02:28 crc kubenswrapper[4809]: I0312 08:02:28.780877 4809 generic.go:334] "Generic (PLEG): container finished" podID="18583b8a-a86f-44e3-b813-eb9df44eab48" containerID="2373bfde708554b1110abb519415ec822086be17cf261f558a3b06fe17ac21f3" exitCode=0 Mar 12 08:02:28 crc kubenswrapper[4809]: I0312 08:02:28.780990 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"18583b8a-a86f-44e3-b813-eb9df44eab48","Type":"ContainerDied","Data":"2373bfde708554b1110abb519415ec822086be17cf261f558a3b06fe17ac21f3"} Mar 12 08:02:31 crc kubenswrapper[4809]: I0312 08:02:31.432151 4809 ???:1] "http: TLS handshake error from 192.168.126.11:53770: no serving certificate available for the kubelet" Mar 12 08:02:32 crc kubenswrapper[4809]: I0312 08:02:32.258506 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:32 crc kubenswrapper[4809]: I0312 08:02:32.263802 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:02:33 crc kubenswrapper[4809]: I0312 08:02:33.983866 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fl8dg" Mar 12 08:02:35 crc kubenswrapper[4809]: I0312 08:02:35.008507 4809 ???:1] "http: TLS handshake error from 192.168.126.11:47730: no serving certificate available for the kubelet" Mar 12 08:02:35 crc kubenswrapper[4809]: I0312 08:02:35.499432 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " 
pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:02:35 crc kubenswrapper[4809]: I0312 08:02:35.523821 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d31c58d-0f0d-431f-bebc-57173f467eee-metrics-certs\") pod \"network-metrics-daemon-p566k\" (UID: \"3d31c58d-0f0d-431f-bebc-57173f467eee\") " pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:02:35 crc kubenswrapper[4809]: I0312 08:02:35.735492 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p566k" Mar 12 08:02:37 crc kubenswrapper[4809]: I0312 08:02:37.839344 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg"] Mar 12 08:02:37 crc kubenswrapper[4809]: I0312 08:02:37.839579 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" podUID="cc396022-e360-45ad-a69d-c913cf15bfc2" containerName="controller-manager" containerID="cri-o://2d2380dfe104c7db5d89ab69e54a3bd07788ff2767d27f9f6c20e55700f0ea54" gracePeriod=30 Mar 12 08:02:37 crc kubenswrapper[4809]: I0312 08:02:37.874827 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn"] Mar 12 08:02:37 crc kubenswrapper[4809]: I0312 08:02:37.875038 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" podUID="9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" containerName="route-controller-manager" containerID="cri-o://c986b0d598fc8e45f76eb40d02efb60ea62d7feac90a6c53d6aaa84a329b8e13" gracePeriod=30 Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.713469 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.857335 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18583b8a-a86f-44e3-b813-eb9df44eab48-kube-api-access\") pod \"18583b8a-a86f-44e3-b813-eb9df44eab48\" (UID: \"18583b8a-a86f-44e3-b813-eb9df44eab48\") " Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.857422 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18583b8a-a86f-44e3-b813-eb9df44eab48-kubelet-dir\") pod \"18583b8a-a86f-44e3-b813-eb9df44eab48\" (UID: \"18583b8a-a86f-44e3-b813-eb9df44eab48\") " Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.857970 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18583b8a-a86f-44e3-b813-eb9df44eab48-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "18583b8a-a86f-44e3-b813-eb9df44eab48" (UID: "18583b8a-a86f-44e3-b813-eb9df44eab48"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.866290 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18583b8a-a86f-44e3-b813-eb9df44eab48-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "18583b8a-a86f-44e3-b813-eb9df44eab48" (UID: "18583b8a-a86f-44e3-b813-eb9df44eab48"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.908155 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc396022-e360-45ad-a69d-c913cf15bfc2" containerID="2d2380dfe104c7db5d89ab69e54a3bd07788ff2767d27f9f6c20e55700f0ea54" exitCode=0 Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.908626 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" event={"ID":"cc396022-e360-45ad-a69d-c913cf15bfc2","Type":"ContainerDied","Data":"2d2380dfe104c7db5d89ab69e54a3bd07788ff2767d27f9f6c20e55700f0ea54"} Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.913176 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"18583b8a-a86f-44e3-b813-eb9df44eab48","Type":"ContainerDied","Data":"f003df182b8801dfaa36e6173e2ccb650523d1cafe49ffff13e6ba6688ffb586"} Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.913299 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f003df182b8801dfaa36e6173e2ccb650523d1cafe49ffff13e6ba6688ffb586" Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.913378 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.931206 4809 generic.go:334] "Generic (PLEG): container finished" podID="9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" containerID="c986b0d598fc8e45f76eb40d02efb60ea62d7feac90a6c53d6aaa84a329b8e13" exitCode=0 Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.931285 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" event={"ID":"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e","Type":"ContainerDied","Data":"c986b0d598fc8e45f76eb40d02efb60ea62d7feac90a6c53d6aaa84a329b8e13"} Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.966087 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18583b8a-a86f-44e3-b813-eb9df44eab48-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:38 crc kubenswrapper[4809]: I0312 08:02:38.966150 4809 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18583b8a-a86f-44e3-b813-eb9df44eab48-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.470883 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.590928 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6a0553-31de-43c6-8b88-95e21d1852af-kube-api-access\") pod \"7b6a0553-31de-43c6-8b88-95e21d1852af\" (UID: \"7b6a0553-31de-43c6-8b88-95e21d1852af\") " Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.591062 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6a0553-31de-43c6-8b88-95e21d1852af-kubelet-dir\") pod \"7b6a0553-31de-43c6-8b88-95e21d1852af\" (UID: \"7b6a0553-31de-43c6-8b88-95e21d1852af\") " Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.591358 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6a0553-31de-43c6-8b88-95e21d1852af-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7b6a0553-31de-43c6-8b88-95e21d1852af" (UID: "7b6a0553-31de-43c6-8b88-95e21d1852af"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.602501 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6a0553-31de-43c6-8b88-95e21d1852af-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7b6a0553-31de-43c6-8b88-95e21d1852af" (UID: "7b6a0553-31de-43c6-8b88-95e21d1852af"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.692959 4809 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b6a0553-31de-43c6-8b88-95e21d1852af-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.693008 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b6a0553-31de-43c6-8b88-95e21d1852af-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.949865 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7b6a0553-31de-43c6-8b88-95e21d1852af","Type":"ContainerDied","Data":"3905851a796c1dc98401610e23073b67c67434b6961451beb4cfcd755bdbe32c"} Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.949931 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3905851a796c1dc98401610e23073b67c67434b6961451beb4cfcd755bdbe32c" Mar 12 08:02:40 crc kubenswrapper[4809]: I0312 08:02:40.949969 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 08:02:41 crc kubenswrapper[4809]: I0312 08:02:41.857765 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:02:42 crc kubenswrapper[4809]: I0312 08:02:42.141158 4809 patch_prober.go:28] interesting pod/controller-manager-5dd879c6b8-wpdhg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 08:02:42 crc kubenswrapper[4809]: I0312 08:02:42.141241 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" podUID="cc396022-e360-45ad-a69d-c913cf15bfc2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 08:02:42 crc kubenswrapper[4809]: I0312 08:02:42.388695 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 08:02:42 crc kubenswrapper[4809]: E0312 08:02:42.936402 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 12 08:02:42 crc kubenswrapper[4809]: E0312 08:02:42.936574 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 08:02:42 crc kubenswrapper[4809]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' 
| xargs --no-run-if-empty oc adm certificate approve Mar 12 08:02:42 crc kubenswrapper[4809]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smmpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29555042-287fz_openshift-infra(da2d7bf2-3fcc-42c4-ae05-c16d5c714a26): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 12 08:02:42 crc kubenswrapper[4809]: > logger="UnhandledError" Mar 12 08:02:42 crc kubenswrapper[4809]: E0312 08:02:42.938200 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29555042-287fz" podUID="da2d7bf2-3fcc-42c4-ae05-c16d5c714a26" Mar 12 08:02:42 crc kubenswrapper[4809]: E0312 08:02:42.965090 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29555042-287fz" podUID="da2d7bf2-3fcc-42c4-ae05-c16d5c714a26" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.004033 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.032477 4809 patch_prober.go:28] interesting pod/route-controller-manager-6686bd87bb-zdkcn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" start-of-body= Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.032534 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" podUID="9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.155337 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc396022-e360-45ad-a69d-c913cf15bfc2-serving-cert\") pod \"cc396022-e360-45ad-a69d-c913cf15bfc2\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.155421 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-proxy-ca-bundles\") pod \"cc396022-e360-45ad-a69d-c913cf15bfc2\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.155469 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4ltd\" (UniqueName: \"kubernetes.io/projected/cc396022-e360-45ad-a69d-c913cf15bfc2-kube-api-access-f4ltd\") pod \"cc396022-e360-45ad-a69d-c913cf15bfc2\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " Mar 12 08:02:44 crc 
kubenswrapper[4809]: I0312 08:02:44.155516 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-client-ca\") pod \"cc396022-e360-45ad-a69d-c913cf15bfc2\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.155546 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-config\") pod \"cc396022-e360-45ad-a69d-c913cf15bfc2\" (UID: \"cc396022-e360-45ad-a69d-c913cf15bfc2\") " Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.156686 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cc396022-e360-45ad-a69d-c913cf15bfc2" (UID: "cc396022-e360-45ad-a69d-c913cf15bfc2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.156750 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-client-ca" (OuterVolumeSpecName: "client-ca") pod "cc396022-e360-45ad-a69d-c913cf15bfc2" (UID: "cc396022-e360-45ad-a69d-c913cf15bfc2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.156822 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-config" (OuterVolumeSpecName: "config") pod "cc396022-e360-45ad-a69d-c913cf15bfc2" (UID: "cc396022-e360-45ad-a69d-c913cf15bfc2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.169454 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc396022-e360-45ad-a69d-c913cf15bfc2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc396022-e360-45ad-a69d-c913cf15bfc2" (UID: "cc396022-e360-45ad-a69d-c913cf15bfc2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.169475 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc396022-e360-45ad-a69d-c913cf15bfc2-kube-api-access-f4ltd" (OuterVolumeSpecName: "kube-api-access-f4ltd") pod "cc396022-e360-45ad-a69d-c913cf15bfc2" (UID: "cc396022-e360-45ad-a69d-c913cf15bfc2"). InnerVolumeSpecName "kube-api-access-f4ltd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.257450 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4ltd\" (UniqueName: \"kubernetes.io/projected/cc396022-e360-45ad-a69d-c913cf15bfc2-kube-api-access-f4ltd\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.257496 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.257510 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.257520 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc396022-e360-45ad-a69d-c913cf15bfc2-serving-cert\") on node \"crc\" DevicePath 
\"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.257529 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc396022-e360-45ad-a69d-c913cf15bfc2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.465682 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.560755 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-client-ca\") pod \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.560838 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz95q\" (UniqueName: \"kubernetes.io/projected/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-kube-api-access-cz95q\") pod \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.560973 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-serving-cert\") pod \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.561170 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-config\") pod \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\" (UID: \"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e\") " Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.562251 4809 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-client-ca" (OuterVolumeSpecName: "client-ca") pod "9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" (UID: "9e0684e0-af85-4e7f-a3b5-507d93a0bc7e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.562686 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-config" (OuterVolumeSpecName: "config") pod "9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" (UID: "9e0684e0-af85-4e7f-a3b5-507d93a0bc7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.565952 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-kube-api-access-cz95q" (OuterVolumeSpecName: "kube-api-access-cz95q") pod "9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" (UID: "9e0684e0-af85-4e7f-a3b5-507d93a0bc7e"). InnerVolumeSpecName "kube-api-access-cz95q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.566746 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" (UID: "9e0684e0-af85-4e7f-a3b5-507d93a0bc7e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.669038 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.669096 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.669130 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz95q\" (UniqueName: \"kubernetes.io/projected/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-kube-api-access-cz95q\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.669147 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.974837 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" event={"ID":"cc396022-e360-45ad-a69d-c913cf15bfc2","Type":"ContainerDied","Data":"28f29299a232d5eed2204855428075b36310e6413796bd6fc3b9c3f95a413c12"} Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.974931 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.974945 4809 scope.go:117] "RemoveContainer" containerID="2d2380dfe104c7db5d89ab69e54a3bd07788ff2767d27f9f6c20e55700f0ea54" Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.978833 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" event={"ID":"9e0684e0-af85-4e7f-a3b5-507d93a0bc7e","Type":"ContainerDied","Data":"a1e14f0b58ecc81e4880cf01791a34aa005aeb6159cc441294b20a26dec11fb7"} Mar 12 08:02:44 crc kubenswrapper[4809]: I0312 08:02:44.978875 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn" Mar 12 08:02:45 crc kubenswrapper[4809]: I0312 08:02:45.011804 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg"] Mar 12 08:02:45 crc kubenswrapper[4809]: I0312 08:02:45.021031 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5dd879c6b8-wpdhg"] Mar 12 08:02:45 crc kubenswrapper[4809]: I0312 08:02:45.025887 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn"] Mar 12 08:02:45 crc kubenswrapper[4809]: I0312 08:02:45.027997 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6686bd87bb-zdkcn"] Mar 12 08:02:45 crc kubenswrapper[4809]: I0312 08:02:45.048596 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 
08:02:45 crc kubenswrapper[4809]: I0312 08:02:45.048820 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:02:45 crc kubenswrapper[4809]: I0312 08:02:45.115818 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" path="/var/lib/kubelet/pods/9e0684e0-af85-4e7f-a3b5-507d93a0bc7e/volumes" Mar 12 08:02:45 crc kubenswrapper[4809]: I0312 08:02:45.116535 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc396022-e360-45ad-a69d-c913cf15bfc2" path="/var/lib/kubelet/pods/cc396022-e360-45ad-a69d-c913cf15bfc2/volumes" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.857572 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-867bdb74dd-jwgsp"] Mar 12 08:02:46 crc kubenswrapper[4809]: E0312 08:02:46.858031 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6a0553-31de-43c6-8b88-95e21d1852af" containerName="pruner" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858053 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6a0553-31de-43c6-8b88-95e21d1852af" containerName="pruner" Mar 12 08:02:46 crc kubenswrapper[4809]: E0312 08:02:46.858072 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc396022-e360-45ad-a69d-c913cf15bfc2" containerName="controller-manager" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858081 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc396022-e360-45ad-a69d-c913cf15bfc2" containerName="controller-manager" Mar 12 08:02:46 crc kubenswrapper[4809]: E0312 08:02:46.858099 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="18583b8a-a86f-44e3-b813-eb9df44eab48" containerName="pruner" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858106 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="18583b8a-a86f-44e3-b813-eb9df44eab48" containerName="pruner" Mar 12 08:02:46 crc kubenswrapper[4809]: E0312 08:02:46.858135 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" containerName="route-controller-manager" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858143 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" containerName="route-controller-manager" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858270 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc396022-e360-45ad-a69d-c913cf15bfc2" containerName="controller-manager" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858291 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="18583b8a-a86f-44e3-b813-eb9df44eab48" containerName="pruner" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858301 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0684e0-af85-4e7f-a3b5-507d93a0bc7e" containerName="route-controller-manager" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858314 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6a0553-31de-43c6-8b88-95e21d1852af" containerName="pruner" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.858931 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.863646 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c"] Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.864844 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.864903 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.865177 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.865232 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.868920 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.889366 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.892259 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.892583 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.892760 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.895397 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.899654 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 
08:02:46.900247 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.900947 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.902583 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867bdb74dd-jwgsp"] Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.937475 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c"] Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.939104 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947264 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85dcf3c4-e1ad-4e82-859a-5810accef385-serving-cert\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947335 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-config\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947363 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-proxy-ca-bundles\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947422 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-serving-cert\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947454 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-config\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947482 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6hf\" (UniqueName: \"kubernetes.io/projected/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-kube-api-access-9w6hf\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947517 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffgvd\" (UniqueName: \"kubernetes.io/projected/85dcf3c4-e1ad-4e82-859a-5810accef385-kube-api-access-ffgvd\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947550 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-client-ca\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:46 crc kubenswrapper[4809]: I0312 08:02:46.947577 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-client-ca\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.053381 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85dcf3c4-e1ad-4e82-859a-5810accef385-serving-cert\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.053432 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-config\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.053455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-proxy-ca-bundles\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.053482 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-serving-cert\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.053505 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-config\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.053546 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w6hf\" (UniqueName: \"kubernetes.io/projected/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-kube-api-access-9w6hf\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.053579 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffgvd\" (UniqueName: \"kubernetes.io/projected/85dcf3c4-e1ad-4e82-859a-5810accef385-kube-api-access-ffgvd\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc 
kubenswrapper[4809]: I0312 08:02:47.053611 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-client-ca\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.053669 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-client-ca\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.058607 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.058747 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.059046 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.059252 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.059319 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.059598 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.065288 4809 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.066203 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-client-ca\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.067280 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-config\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.067658 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-proxy-ca-bundles\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.067789 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-client-ca\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.068680 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-config\") pod 
\"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.069830 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.071328 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.074098 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85dcf3c4-e1ad-4e82-859a-5810accef385-serving-cert\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.079805 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.080910 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.088800 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-serving-cert\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.091655 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w6hf\" (UniqueName: 
\"kubernetes.io/projected/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-kube-api-access-9w6hf\") pod \"controller-manager-867bdb74dd-jwgsp\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.092388 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffgvd\" (UniqueName: \"kubernetes.io/projected/85dcf3c4-e1ad-4e82-859a-5810accef385-kube-api-access-ffgvd\") pod \"route-controller-manager-6dbc9bdb9-dkc2c\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.251829 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.258637 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.259922 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:47 crc kubenswrapper[4809]: I0312 08:02:47.266835 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:51 crc kubenswrapper[4809]: I0312 08:02:51.232921 4809 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod98cc5813-f6d5-4e2a-a7d4-2546e18e60f9"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod98cc5813-f6d5-4e2a-a7d4-2546e18e60f9] : Timed out while waiting for systemd to remove kubepods-burstable-pod98cc5813_f6d5_4e2a_a7d4_2546e18e60f9.slice" Mar 12 08:02:51 crc kubenswrapper[4809]: E0312 08:02:51.233707 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod98cc5813-f6d5-4e2a-a7d4-2546e18e60f9] : unable to destroy cgroup paths for cgroup [kubepods burstable pod98cc5813-f6d5-4e2a-a7d4-2546e18e60f9] : Timed out while waiting for systemd to remove kubepods-burstable-pod98cc5813_f6d5_4e2a_a7d4_2546e18e60f9.slice" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" podUID="98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" Mar 12 08:02:51 crc kubenswrapper[4809]: E0312 08:02:51.745540 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 12 08:02:51 crc kubenswrapper[4809]: E0312 08:02:51.746050 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrjd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vhrcn_openshift-marketplace(c914c474-5d5b-415b-ad58-76c7ac15dc94): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:02:51 crc kubenswrapper[4809]: E0312 08:02:51.747310 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vhrcn" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" Mar 12 08:02:51 crc 
kubenswrapper[4809]: I0312 08:02:51.861296 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p566k"] Mar 12 08:02:51 crc kubenswrapper[4809]: I0312 08:02:51.909813 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5hmb" Mar 12 08:02:51 crc kubenswrapper[4809]: I0312 08:02:51.946274 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5hmb"] Mar 12 08:02:51 crc kubenswrapper[4809]: I0312 08:02:51.947491 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5hmb"] Mar 12 08:02:53 crc kubenswrapper[4809]: I0312 08:02:53.112270 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98cc5813-f6d5-4e2a-a7d4-2546e18e60f9" path="/var/lib/kubelet/pods/98cc5813-f6d5-4e2a-a7d4-2546e18e60f9/volumes" Mar 12 08:02:53 crc kubenswrapper[4809]: E0312 08:02:53.196709 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vhrcn" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" Mar 12 08:02:53 crc kubenswrapper[4809]: E0312 08:02:53.267172 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 12 08:02:53 crc kubenswrapper[4809]: E0312 08:02:53.267338 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl7hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-btdvf_openshift-marketplace(1df02216-0a1b-4417-9914-e7b9452a9c6b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:02:53 crc kubenswrapper[4809]: E0312 08:02:53.268566 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-btdvf" 
podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" Mar 12 08:02:54 crc kubenswrapper[4809]: I0312 08:02:54.512432 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" Mar 12 08:02:54 crc kubenswrapper[4809]: E0312 08:02:54.691773 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 12 08:02:54 crc kubenswrapper[4809]: E0312 08:02:54.692605 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mj5wj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorPr
ofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wmft4_openshift-marketplace(6adda456-f49a-4a6d-b09b-8841158e9268): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:02:54 crc kubenswrapper[4809]: E0312 08:02:54.693837 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wmft4" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" Mar 12 08:02:57 crc kubenswrapper[4809]: I0312 08:02:57.836766 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867bdb74dd-jwgsp"] Mar 12 08:02:57 crc kubenswrapper[4809]: I0312 08:02:57.916258 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c"] Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.074434 4809 scope.go:117] "RemoveContainer" containerID="c986b0d598fc8e45f76eb40d02efb60ea62d7feac90a6c53d6aaa84a329b8e13" Mar 12 08:02:58 crc kubenswrapper[4809]: W0312 08:02:58.087373 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d31c58d_0f0d_431f_bebc_57173f467eee.slice/crio-e17d842e5c7dd3d16faa8a99ffab9e2cdc512258fa9a483dcbaec1115cf0af00 WatchSource:0}: Error finding container e17d842e5c7dd3d16faa8a99ffab9e2cdc512258fa9a483dcbaec1115cf0af00: Status 404 returned error can't find the container with id e17d842e5c7dd3d16faa8a99ffab9e2cdc512258fa9a483dcbaec1115cf0af00 
Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.087894 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-btdvf" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.088300 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wmft4" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.161496 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.161954 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xk59x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m6gcg_openshift-marketplace(511df0b5-a255-46fe-aeba-cd5daa01e7c9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.163296 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-m6gcg" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" Mar 12 08:02:58 crc 
kubenswrapper[4809]: E0312 08:02:58.163566 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.167903 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wphrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-jkqnv_openshift-marketplace(75d1a803-df56-424d-ace1-ecc868081fca): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.169940 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.169943 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jkqnv" podUID="75d1a803-df56-424d-ace1-ecc868081fca" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.170243 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlkfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pqsb2_openshift-marketplace(bd4be45a-8370-4cbe-a718-bb31fd64d99a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.173526 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pqsb2" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" Mar 12 08:02:58 crc 
kubenswrapper[4809]: E0312 08:02:58.210440 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.210626 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvthr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-4qrn8_openshift-marketplace(f283aa6d-85ad-44ff-8758-d8251b00ae50): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.211818 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4qrn8" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.257309 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.257526 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96kww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jn4tc_openshift-marketplace(c060e910-de6e-43df-a148-66f07bc71180): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.259111 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jn4tc" podUID="c060e910-de6e-43df-a148-66f07bc71180" Mar 12 08:02:58 crc 
kubenswrapper[4809]: I0312 08:02:58.492964 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c"] Mar 12 08:02:58 crc kubenswrapper[4809]: W0312 08:02:58.498673 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85dcf3c4_e1ad_4e82_859a_5810accef385.slice/crio-6a00193000507ab77e61dcafc54d816d0cf6a83de89c54942ca9b02054b6d323 WatchSource:0}: Error finding container 6a00193000507ab77e61dcafc54d816d0cf6a83de89c54942ca9b02054b6d323: Status 404 returned error can't find the container with id 6a00193000507ab77e61dcafc54d816d0cf6a83de89c54942ca9b02054b6d323 Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.573486 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867bdb74dd-jwgsp"] Mar 12 08:02:58 crc kubenswrapper[4809]: W0312 08:02:58.579745 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod734d7e78_e1bd_4ebe_ae9d_bb289d17fd6d.slice/crio-f9052373b68ea5f5e864abe2e457c2e862315f13d52b7c5bbcb437553bb966ef WatchSource:0}: Error finding container f9052373b68ea5f5e864abe2e457c2e862315f13d52b7c5bbcb437553bb966ef: Status 404 returned error can't find the container with id f9052373b68ea5f5e864abe2e457c2e862315f13d52b7c5bbcb437553bb966ef Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.948887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" event={"ID":"85dcf3c4-e1ad-4e82-859a-5810accef385","Type":"ContainerStarted","Data":"99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427"} Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.950018 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" event={"ID":"85dcf3c4-e1ad-4e82-859a-5810accef385","Type":"ContainerStarted","Data":"6a00193000507ab77e61dcafc54d816d0cf6a83de89c54942ca9b02054b6d323"} Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.950212 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" podUID="85dcf3c4-e1ad-4e82-859a-5810accef385" containerName="route-controller-manager" containerID="cri-o://99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427" gracePeriod=30 Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.951874 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.960857 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" event={"ID":"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d","Type":"ContainerStarted","Data":"25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134"} Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.960910 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" event={"ID":"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d","Type":"ContainerStarted","Data":"f9052373b68ea5f5e864abe2e457c2e862315f13d52b7c5bbcb437553bb966ef"} Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.960967 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" podUID="734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" containerName="controller-manager" containerID="cri-o://25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134" gracePeriod=30 Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.961047 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.965420 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p566k" event={"ID":"3d31c58d-0f0d-431f-bebc-57173f467eee","Type":"ContainerStarted","Data":"a7667de72706b3b27647be2dbcfa7bd9cc7e45e2dacc8e784d0e985a9de27848"} Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.965459 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p566k" event={"ID":"3d31c58d-0f0d-431f-bebc-57173f467eee","Type":"ContainerStarted","Data":"af79b6837f265e3607e5a8d2bd7970666503fe64e02b4310165b455b1638d31d"} Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.965472 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p566k" event={"ID":"3d31c58d-0f0d-431f-bebc-57173f467eee","Type":"ContainerStarted","Data":"e17d842e5c7dd3d16faa8a99ffab9e2cdc512258fa9a483dcbaec1115cf0af00"} Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.968478 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jn4tc" podUID="c060e910-de6e-43df-a148-66f07bc71180" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.968488 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4qrn8" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.968515 4809 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jkqnv" podUID="75d1a803-df56-424d-ace1-ecc868081fca" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.968506 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pqsb2" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" Mar 12 08:02:58 crc kubenswrapper[4809]: E0312 08:02:58.968654 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6gcg" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.972318 4809 patch_prober.go:28] interesting pod/route-controller-manager-6dbc9bdb9-dkc2c container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:43770->10.217.0.60:8443: read: connection reset by peer" start-of-body= Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.972357 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" podUID="85dcf3c4-e1ad-4e82-859a-5810accef385" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:43770->10.217.0.60:8443: read: connection reset by peer" Mar 12 08:02:58 crc kubenswrapper[4809]: I0312 08:02:58.994865 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.012489 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" podStartSLOduration=22.012468997 podStartE2EDuration="22.012468997s" podCreationTimestamp="2026-03-12 08:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:58.976234748 +0000 UTC m=+252.558270481" watchObservedRunningTime="2026-03-12 08:02:59.012468997 +0000 UTC m=+252.594504730" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.082443 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.083749 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.096355 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.105101 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.105462 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.127960 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" podStartSLOduration=22.12794183 podStartE2EDuration="22.12794183s" podCreationTimestamp="2026-03-12 08:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:59.104669727 +0000 UTC m=+252.686705460" watchObservedRunningTime="2026-03-12 08:02:59.12794183 +0000 UTC m=+252.709977553" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.140410 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22b0d41c-40b2-4eb6-aac0-86073670971b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22b0d41c-40b2-4eb6-aac0-86073670971b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.140485 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22b0d41c-40b2-4eb6-aac0-86073670971b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22b0d41c-40b2-4eb6-aac0-86073670971b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.144398 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-p566k" podStartSLOduration=215.144378206 podStartE2EDuration="3m35.144378206s" podCreationTimestamp="2026-03-12 07:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:02:59.143693655 +0000 UTC m=+252.725729408" watchObservedRunningTime="2026-03-12 08:02:59.144378206 +0000 UTC m=+252.726413939" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.242332 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22b0d41c-40b2-4eb6-aac0-86073670971b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22b0d41c-40b2-4eb6-aac0-86073670971b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" 
Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.242444 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22b0d41c-40b2-4eb6-aac0-86073670971b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22b0d41c-40b2-4eb6-aac0-86073670971b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.242535 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22b0d41c-40b2-4eb6-aac0-86073670971b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22b0d41c-40b2-4eb6-aac0-86073670971b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.272970 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22b0d41c-40b2-4eb6-aac0-86073670971b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22b0d41c-40b2-4eb6-aac0-86073670971b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.386954 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6dbc9bdb9-dkc2c_85dcf3c4-e1ad-4e82-859a-5810accef385/route-controller-manager/0.log" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.387028 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.391532 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.415423 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7"] Mar 12 08:02:59 crc kubenswrapper[4809]: E0312 08:02:59.415673 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85dcf3c4-e1ad-4e82-859a-5810accef385" containerName="route-controller-manager" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.415689 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="85dcf3c4-e1ad-4e82-859a-5810accef385" containerName="route-controller-manager" Mar 12 08:02:59 crc kubenswrapper[4809]: E0312 08:02:59.415699 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" containerName="controller-manager" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.415706 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" containerName="controller-manager" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.415801 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="85dcf3c4-e1ad-4e82-859a-5810accef385" containerName="route-controller-manager" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.415810 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" containerName="controller-manager" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.416195 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.428899 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.432859 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7"] Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455177 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffgvd\" (UniqueName: \"kubernetes.io/projected/85dcf3c4-e1ad-4e82-859a-5810accef385-kube-api-access-ffgvd\") pod \"85dcf3c4-e1ad-4e82-859a-5810accef385\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455286 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-client-ca\") pod \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455323 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-config\") pod \"85dcf3c4-e1ad-4e82-859a-5810accef385\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455352 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w6hf\" (UniqueName: \"kubernetes.io/projected/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-kube-api-access-9w6hf\") pod \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455374 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85dcf3c4-e1ad-4e82-859a-5810accef385-serving-cert\") pod \"85dcf3c4-e1ad-4e82-859a-5810accef385\" (UID: 
\"85dcf3c4-e1ad-4e82-859a-5810accef385\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455420 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-proxy-ca-bundles\") pod \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455464 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-config\") pod \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455487 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-serving-cert\") pod \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\" (UID: \"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455505 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-client-ca\") pod \"85dcf3c4-e1ad-4e82-859a-5810accef385\" (UID: \"85dcf3c4-e1ad-4e82-859a-5810accef385\") " Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455635 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-client-ca\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455663 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-config\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455701 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-serving-cert\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.455744 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbzp\" (UniqueName: \"kubernetes.io/projected/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-kube-api-access-pvbzp\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.457251 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-config" (OuterVolumeSpecName: "config") pod "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" (UID: "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.458850 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" (UID: "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.459046 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85dcf3c4-e1ad-4e82-859a-5810accef385-kube-api-access-ffgvd" (OuterVolumeSpecName: "kube-api-access-ffgvd") pod "85dcf3c4-e1ad-4e82-859a-5810accef385" (UID: "85dcf3c4-e1ad-4e82-859a-5810accef385"). InnerVolumeSpecName "kube-api-access-ffgvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.459368 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" (UID: "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.459406 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" (UID: "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.459887 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-client-ca" (OuterVolumeSpecName: "client-ca") pod "85dcf3c4-e1ad-4e82-859a-5810accef385" (UID: "85dcf3c4-e1ad-4e82-859a-5810accef385"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.459903 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85dcf3c4-e1ad-4e82-859a-5810accef385-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85dcf3c4-e1ad-4e82-859a-5810accef385" (UID: "85dcf3c4-e1ad-4e82-859a-5810accef385"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.459991 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-config" (OuterVolumeSpecName: "config") pod "85dcf3c4-e1ad-4e82-859a-5810accef385" (UID: "85dcf3c4-e1ad-4e82-859a-5810accef385"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.461498 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-kube-api-access-9w6hf" (OuterVolumeSpecName: "kube-api-access-9w6hf") pod "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" (UID: "734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d"). InnerVolumeSpecName "kube-api-access-9w6hf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.556990 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbzp\" (UniqueName: \"kubernetes.io/projected/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-kube-api-access-pvbzp\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557489 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-client-ca\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557515 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-config\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557551 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-serving-cert\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557598 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557610 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557620 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w6hf\" (UniqueName: \"kubernetes.io/projected/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-kube-api-access-9w6hf\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557630 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85dcf3c4-e1ad-4e82-859a-5810accef385-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557638 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557647 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557655 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.557663 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85dcf3c4-e1ad-4e82-859a-5810accef385-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 
08:02:59.557671 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffgvd\" (UniqueName: \"kubernetes.io/projected/85dcf3c4-e1ad-4e82-859a-5810accef385-kube-api-access-ffgvd\") on node \"crc\" DevicePath \"\"" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.559174 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-config\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.559804 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-client-ca\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.561661 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-serving-cert\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.570760 4809 csr.go:261] certificate signing request csr-8w97d is approved, waiting to be issued Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.575435 4809 csr.go:257] certificate signing request csr-8w97d is issued Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.576481 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbzp\" (UniqueName: 
\"kubernetes.io/projected/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-kube-api-access-pvbzp\") pod \"route-controller-manager-b67f6bdb8-9j9d7\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.632674 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.732942 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.973378 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22b0d41c-40b2-4eb6-aac0-86073670971b","Type":"ContainerStarted","Data":"d518d63ceb10d489047204b4c91149dc445594af725f0d38f88fca1f76e518d1"} Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.975238 4809 generic.go:334] "Generic (PLEG): container finished" podID="da2d7bf2-3fcc-42c4-ae05-c16d5c714a26" containerID="62e5cc06c620a7dcb17248e279555452486e3ebe420de46cfb780e6705bf96bd" exitCode=0 Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.975336 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555042-287fz" event={"ID":"da2d7bf2-3fcc-42c4-ae05-c16d5c714a26","Type":"ContainerDied","Data":"62e5cc06c620a7dcb17248e279555452486e3ebe420de46cfb780e6705bf96bd"} Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.978295 4809 generic.go:334] "Generic (PLEG): container finished" podID="734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" containerID="25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134" exitCode=0 Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.978355 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" event={"ID":"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d","Type":"ContainerDied","Data":"25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134"} Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.978439 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" event={"ID":"734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d","Type":"ContainerDied","Data":"f9052373b68ea5f5e864abe2e457c2e862315f13d52b7c5bbcb437553bb966ef"} Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.978464 4809 scope.go:117] "RemoveContainer" containerID="25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.978378 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867bdb74dd-jwgsp" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.980185 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6dbc9bdb9-dkc2c_85dcf3c4-e1ad-4e82-859a-5810accef385/route-controller-manager/0.log" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.980232 4809 generic.go:334] "Generic (PLEG): container finished" podID="85dcf3c4-e1ad-4e82-859a-5810accef385" containerID="99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427" exitCode=255 Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.980474 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" event={"ID":"85dcf3c4-e1ad-4e82-859a-5810accef385","Type":"ContainerDied","Data":"99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427"} Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.980513 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" event={"ID":"85dcf3c4-e1ad-4e82-859a-5810accef385","Type":"ContainerDied","Data":"6a00193000507ab77e61dcafc54d816d0cf6a83de89c54942ca9b02054b6d323"} Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.980599 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.996411 4809 scope.go:117] "RemoveContainer" containerID="25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134" Mar 12 08:02:59 crc kubenswrapper[4809]: E0312 08:02:59.998331 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134\": container with ID starting with 25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134 not found: ID does not exist" containerID="25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.998374 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134"} err="failed to get container status \"25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134\": rpc error: code = NotFound desc = could not find container \"25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134\": container with ID starting with 25771777922a2175550303cb18482a5b2380c7b466235a39086985d633ab4134 not found: ID does not exist" Mar 12 08:02:59 crc kubenswrapper[4809]: I0312 08:02:59.998406 4809 scope.go:117] "RemoveContainer" containerID="99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427" Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.011053 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-867bdb74dd-jwgsp"] Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.014976 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-867bdb74dd-jwgsp"] Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.019399 4809 scope.go:117] "RemoveContainer" containerID="99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427" Mar 12 08:03:00 crc kubenswrapper[4809]: E0312 08:03:00.019827 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427\": container with ID starting with 99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427 not found: ID does not exist" containerID="99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427" Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.019855 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427"} err="failed to get container status \"99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427\": rpc error: code = NotFound desc = could not find container \"99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427\": container with ID starting with 99f3bb18e47ca4201e638363c900e066b1d3f44e2d17dadfcb63528acc17b427 not found: ID does not exist" Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.029567 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c"] Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.032178 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbc9bdb9-dkc2c"] Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.141503 4809 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7"] Mar 12 08:03:00 crc kubenswrapper[4809]: W0312 08:03:00.151535 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07758ee8_b281_47c3_9bf5_ffc0d1aa9c38.slice/crio-1835d52b4fbe98f462bb20d202a15bcc461fed1cf84d8de2a9d9227f588ef0f8 WatchSource:0}: Error finding container 1835d52b4fbe98f462bb20d202a15bcc461fed1cf84d8de2a9d9227f588ef0f8: Status 404 returned error can't find the container with id 1835d52b4fbe98f462bb20d202a15bcc461fed1cf84d8de2a9d9227f588ef0f8 Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.580285 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-01 20:39:50.379429887 +0000 UTC Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.580875 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7092h36m49.798560975s for next certificate rotation Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.990170 4809 generic.go:334] "Generic (PLEG): container finished" podID="22b0d41c-40b2-4eb6-aac0-86073670971b" containerID="ae3d3513cdf45d2a05102a0cfd0885ff063c084f473ec4850513142824805036" exitCode=0 Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.990230 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22b0d41c-40b2-4eb6-aac0-86073670971b","Type":"ContainerDied","Data":"ae3d3513cdf45d2a05102a0cfd0885ff063c084f473ec4850513142824805036"} Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.991669 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" 
event={"ID":"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38","Type":"ContainerStarted","Data":"e939c98bba0501614dfb06935887a9e21b94467b8923ce8a532eb5ba9cd88a46"} Mar 12 08:03:00 crc kubenswrapper[4809]: I0312 08:03:00.991701 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" event={"ID":"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38","Type":"ContainerStarted","Data":"1835d52b4fbe98f462bb20d202a15bcc461fed1cf84d8de2a9d9227f588ef0f8"} Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.031185 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" podStartSLOduration=4.031150587 podStartE2EDuration="4.031150587s" podCreationTimestamp="2026-03-12 08:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:03:01.029419507 +0000 UTC m=+254.611455230" watchObservedRunningTime="2026-03-12 08:03:01.031150587 +0000 UTC m=+254.613186320" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.126071 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d" path="/var/lib/kubelet/pods/734d7e78-e1bd-4ebe-ae9d-bb289d17fd6d/volumes" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.126880 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85dcf3c4-e1ad-4e82-859a-5810accef385" path="/var/lib/kubelet/pods/85dcf3c4-e1ad-4e82-859a-5810accef385/volumes" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.273835 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555042-287fz" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.396576 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smmpr\" (UniqueName: \"kubernetes.io/projected/da2d7bf2-3fcc-42c4-ae05-c16d5c714a26-kube-api-access-smmpr\") pod \"da2d7bf2-3fcc-42c4-ae05-c16d5c714a26\" (UID: \"da2d7bf2-3fcc-42c4-ae05-c16d5c714a26\") " Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.402927 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2d7bf2-3fcc-42c4-ae05-c16d5c714a26-kube-api-access-smmpr" (OuterVolumeSpecName: "kube-api-access-smmpr") pod "da2d7bf2-3fcc-42c4-ae05-c16d5c714a26" (UID: "da2d7bf2-3fcc-42c4-ae05-c16d5c714a26"). InnerVolumeSpecName "kube-api-access-smmpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.499469 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smmpr\" (UniqueName: \"kubernetes.io/projected/da2d7bf2-3fcc-42c4-ae05-c16d5c714a26-kube-api-access-smmpr\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.581479 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-14 19:38:53.750863178 +0000 UTC Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.581547 4809 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7403h35m52.169318297s for next certificate rotation Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.849915 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8"] Mar 12 08:03:01 crc kubenswrapper[4809]: E0312 08:03:01.850236 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2d7bf2-3fcc-42c4-ae05-c16d5c714a26" 
containerName="oc" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.850250 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2d7bf2-3fcc-42c4-ae05-c16d5c714a26" containerName="oc" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.850361 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2d7bf2-3fcc-42c4-ae05-c16d5c714a26" containerName="oc" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.850767 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.854272 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.854773 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.854968 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.855351 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.855504 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.855666 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.858390 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8"] Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.861551 4809 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.904431 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5950d3a4-1323-453d-86fc-b5c9cb164dbc-serving-cert\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.904493 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-client-ca\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.904536 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-759bs\" (UniqueName: \"kubernetes.io/projected/5950d3a4-1323-453d-86fc-b5c9cb164dbc-kube-api-access-759bs\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.904638 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-config\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:01 crc kubenswrapper[4809]: I0312 08:03:01.904693 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-proxy-ca-bundles\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:01.999859 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555042-287fz" event={"ID":"da2d7bf2-3fcc-42c4-ae05-c16d5c714a26","Type":"ContainerDied","Data":"292097c459141af1f0205f62399e3e53498f8d0a406b79717420f972954b3a44"} Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:01.999922 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="292097c459141af1f0205f62399e3e53498f8d0a406b79717420f972954b3a44" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:01.999938 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555042-287fz" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.000636 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.005576 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.005797 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-config\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.005837 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-proxy-ca-bundles\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.005868 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5950d3a4-1323-453d-86fc-b5c9cb164dbc-serving-cert\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.005925 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-client-ca\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.005955 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-759bs\" (UniqueName: \"kubernetes.io/projected/5950d3a4-1323-453d-86fc-b5c9cb164dbc-kube-api-access-759bs\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.007490 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-client-ca\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 
08:03:02.007750 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-config\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.008382 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-proxy-ca-bundles\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.011581 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5950d3a4-1323-453d-86fc-b5c9cb164dbc-serving-cert\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.038600 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-759bs\" (UniqueName: \"kubernetes.io/projected/5950d3a4-1323-453d-86fc-b5c9cb164dbc-kube-api-access-759bs\") pod \"controller-manager-8f5fb9cb7-ftwq8\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.178076 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.254285 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.310193 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22b0d41c-40b2-4eb6-aac0-86073670971b-kube-api-access\") pod \"22b0d41c-40b2-4eb6-aac0-86073670971b\" (UID: \"22b0d41c-40b2-4eb6-aac0-86073670971b\") " Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.310236 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22b0d41c-40b2-4eb6-aac0-86073670971b-kubelet-dir\") pod \"22b0d41c-40b2-4eb6-aac0-86073670971b\" (UID: \"22b0d41c-40b2-4eb6-aac0-86073670971b\") " Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.310505 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22b0d41c-40b2-4eb6-aac0-86073670971b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22b0d41c-40b2-4eb6-aac0-86073670971b" (UID: "22b0d41c-40b2-4eb6-aac0-86073670971b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.316590 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b0d41c-40b2-4eb6-aac0-86073670971b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22b0d41c-40b2-4eb6-aac0-86073670971b" (UID: "22b0d41c-40b2-4eb6-aac0-86073670971b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.403148 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8"] Mar 12 08:03:02 crc kubenswrapper[4809]: W0312 08:03:02.409342 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5950d3a4_1323_453d_86fc_b5c9cb164dbc.slice/crio-543600f43e61a00aaaf1fa17de8286945a0dcc2404435adb45ed207f005e28df WatchSource:0}: Error finding container 543600f43e61a00aaaf1fa17de8286945a0dcc2404435adb45ed207f005e28df: Status 404 returned error can't find the container with id 543600f43e61a00aaaf1fa17de8286945a0dcc2404435adb45ed207f005e28df Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.411397 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22b0d41c-40b2-4eb6-aac0-86073670971b-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:02 crc kubenswrapper[4809]: I0312 08:03:02.411425 4809 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22b0d41c-40b2-4eb6-aac0-86073670971b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:03 crc kubenswrapper[4809]: I0312 08:03:03.008606 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22b0d41c-40b2-4eb6-aac0-86073670971b","Type":"ContainerDied","Data":"d518d63ceb10d489047204b4c91149dc445594af725f0d38f88fca1f76e518d1"} Mar 12 08:03:03 crc kubenswrapper[4809]: I0312 08:03:03.009147 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d518d63ceb10d489047204b4c91149dc445594af725f0d38f88fca1f76e518d1" Mar 12 08:03:03 crc kubenswrapper[4809]: I0312 08:03:03.008635 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 08:03:03 crc kubenswrapper[4809]: I0312 08:03:03.021389 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" event={"ID":"5950d3a4-1323-453d-86fc-b5c9cb164dbc","Type":"ContainerStarted","Data":"b2d432021fc1410884399a1bd756a2a0fb6edcfcbde9a5aab294f09c7528993b"} Mar 12 08:03:03 crc kubenswrapper[4809]: I0312 08:03:03.021497 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" event={"ID":"5950d3a4-1323-453d-86fc-b5c9cb164dbc","Type":"ContainerStarted","Data":"543600f43e61a00aaaf1fa17de8286945a0dcc2404435adb45ed207f005e28df"} Mar 12 08:03:04 crc kubenswrapper[4809]: I0312 08:03:04.048964 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" podStartSLOduration=7.048937332 podStartE2EDuration="7.048937332s" podCreationTimestamp="2026-03-12 08:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:03:04.046971714 +0000 UTC m=+257.629007447" watchObservedRunningTime="2026-03-12 08:03:04.048937332 +0000 UTC m=+257.630973065" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.070047 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 08:03:06 crc kubenswrapper[4809]: E0312 08:03:06.070761 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b0d41c-40b2-4eb6-aac0-86073670971b" containerName="pruner" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.070773 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b0d41c-40b2-4eb6-aac0-86073670971b" containerName="pruner" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.070935 4809 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="22b0d41c-40b2-4eb6-aac0-86073670971b" containerName="pruner" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.071383 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.073603 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.073774 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.086417 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.176447 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5870849b-5199-42a7-b08d-614735e47737-kube-api-access\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.176565 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.176599 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-var-lock\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" 
Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.277684 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.277742 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-var-lock\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.277791 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5870849b-5199-42a7-b08d-614735e47737-kube-api-access\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.277860 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.277923 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-var-lock\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.303295 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/5870849b-5199-42a7-b08d-614735e47737-kube-api-access\") pod \"installer-9-crc\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.438933 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:06 crc kubenswrapper[4809]: I0312 08:03:06.900335 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 08:03:07 crc kubenswrapper[4809]: I0312 08:03:07.048228 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5870849b-5199-42a7-b08d-614735e47737","Type":"ContainerStarted","Data":"c81e0298906c3aeb00a790984f638e3fd44782f9b5267de7a53b3e0212685d54"} Mar 12 08:03:07 crc kubenswrapper[4809]: I0312 08:03:07.050873 4809 generic.go:334] "Generic (PLEG): container finished" podID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerID="37c9d156ce82bc23d254b0fdfc60ad8956e7abffce078f4f62f89baa9a799d85" exitCode=0 Mar 12 08:03:07 crc kubenswrapper[4809]: I0312 08:03:07.050904 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhrcn" event={"ID":"c914c474-5d5b-415b-ad58-76c7ac15dc94","Type":"ContainerDied","Data":"37c9d156ce82bc23d254b0fdfc60ad8956e7abffce078f4f62f89baa9a799d85"} Mar 12 08:03:08 crc kubenswrapper[4809]: I0312 08:03:08.061738 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhrcn" event={"ID":"c914c474-5d5b-415b-ad58-76c7ac15dc94","Type":"ContainerStarted","Data":"6ac9290b8842acf4726ca719fcf6a69514ba4d833e22b09fb1312276e3df9ce0"} Mar 12 08:03:08 crc kubenswrapper[4809]: I0312 08:03:08.064064 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"5870849b-5199-42a7-b08d-614735e47737","Type":"ContainerStarted","Data":"dd466d440d4fbc4c6f4697487402dd28a7ca539addce8dd241aba7f548e49590"} Mar 12 08:03:08 crc kubenswrapper[4809]: I0312 08:03:08.089656 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vhrcn" podStartSLOduration=2.753601729 podStartE2EDuration="48.089632402s" podCreationTimestamp="2026-03-12 08:02:20 +0000 UTC" firstStartedPulling="2026-03-12 08:02:22.335050014 +0000 UTC m=+215.917085747" lastFinishedPulling="2026-03-12 08:03:07.671080677 +0000 UTC m=+261.253116420" observedRunningTime="2026-03-12 08:03:08.084021427 +0000 UTC m=+261.666057170" watchObservedRunningTime="2026-03-12 08:03:08.089632402 +0000 UTC m=+261.671668135" Mar 12 08:03:08 crc kubenswrapper[4809]: I0312 08:03:08.110765 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.110733045 podStartE2EDuration="2.110733045s" podCreationTimestamp="2026-03-12 08:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:03:08.106392725 +0000 UTC m=+261.688428458" watchObservedRunningTime="2026-03-12 08:03:08.110733045 +0000 UTC m=+261.692768818" Mar 12 08:03:11 crc kubenswrapper[4809]: I0312 08:03:11.083889 4809 generic.go:334] "Generic (PLEG): container finished" podID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerID="2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5" exitCode=0 Mar 12 08:03:11 crc kubenswrapper[4809]: I0312 08:03:11.083954 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btdvf" event={"ID":"1df02216-0a1b-4417-9914-e7b9452a9c6b","Type":"ContainerDied","Data":"2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5"} Mar 12 08:03:11 crc kubenswrapper[4809]: I0312 08:03:11.090783 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qrn8" event={"ID":"f283aa6d-85ad-44ff-8758-d8251b00ae50","Type":"ContainerStarted","Data":"4166c5bc26ff63694ab15f86f6827f1d249b39370efcc0315d54fb58d20c095e"} Mar 12 08:03:11 crc kubenswrapper[4809]: I0312 08:03:11.251619 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:03:11 crc kubenswrapper[4809]: I0312 08:03:11.251685 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:03:11 crc kubenswrapper[4809]: I0312 08:03:11.403913 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:03:12 crc kubenswrapper[4809]: I0312 08:03:12.099551 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btdvf" event={"ID":"1df02216-0a1b-4417-9914-e7b9452a9c6b","Type":"ContainerStarted","Data":"de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5"} Mar 12 08:03:12 crc kubenswrapper[4809]: I0312 08:03:12.102139 4809 generic.go:334] "Generic (PLEG): container finished" podID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerID="4166c5bc26ff63694ab15f86f6827f1d249b39370efcc0315d54fb58d20c095e" exitCode=0 Mar 12 08:03:12 crc kubenswrapper[4809]: I0312 08:03:12.102216 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qrn8" event={"ID":"f283aa6d-85ad-44ff-8758-d8251b00ae50","Type":"ContainerDied","Data":"4166c5bc26ff63694ab15f86f6827f1d249b39370efcc0315d54fb58d20c095e"} Mar 12 08:03:12 crc kubenswrapper[4809]: I0312 08:03:12.130595 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-btdvf" podStartSLOduration=3.068283892 podStartE2EDuration="51.130570256s" 
podCreationTimestamp="2026-03-12 08:02:21 +0000 UTC" firstStartedPulling="2026-03-12 08:02:23.44373694 +0000 UTC m=+217.025772673" lastFinishedPulling="2026-03-12 08:03:11.506023274 +0000 UTC m=+265.088059037" observedRunningTime="2026-03-12 08:03:12.127876871 +0000 UTC m=+265.709912604" watchObservedRunningTime="2026-03-12 08:03:12.130570256 +0000 UTC m=+265.712605989" Mar 12 08:03:12 crc kubenswrapper[4809]: I0312 08:03:12.153999 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:03:12 crc kubenswrapper[4809]: I0312 08:03:12.178610 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:12 crc kubenswrapper[4809]: I0312 08:03:12.183629 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:12 crc kubenswrapper[4809]: I0312 08:03:12.350424 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s8wdz"] Mar 12 08:03:13 crc kubenswrapper[4809]: I0312 08:03:13.113438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jn4tc" event={"ID":"c060e910-de6e-43df-a148-66f07bc71180","Type":"ContainerStarted","Data":"020d1eb6a1107b0302c6913f22d418bd2a1c75dfc8ba4a4861e02ad42af5a363"} Mar 12 08:03:13 crc kubenswrapper[4809]: I0312 08:03:13.116869 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qrn8" event={"ID":"f283aa6d-85ad-44ff-8758-d8251b00ae50","Type":"ContainerStarted","Data":"4103d51c22bb98f593d103b0a3a160b75f3d4654273da212f6cf13fb92126a29"} Mar 12 08:03:13 crc kubenswrapper[4809]: I0312 08:03:13.159707 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qrn8" 
podStartSLOduration=3.342193725 podStartE2EDuration="49.159669416s" podCreationTimestamp="2026-03-12 08:02:24 +0000 UTC" firstStartedPulling="2026-03-12 08:02:26.689584457 +0000 UTC m=+220.271620180" lastFinishedPulling="2026-03-12 08:03:12.507060138 +0000 UTC m=+266.089095871" observedRunningTime="2026-03-12 08:03:13.15766761 +0000 UTC m=+266.739703343" watchObservedRunningTime="2026-03-12 08:03:13.159669416 +0000 UTC m=+266.741705139" Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.124716 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmft4" event={"ID":"6adda456-f49a-4a6d-b09b-8841158e9268","Type":"ContainerStarted","Data":"f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04"} Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.126596 4809 generic.go:334] "Generic (PLEG): container finished" podID="c060e910-de6e-43df-a148-66f07bc71180" containerID="020d1eb6a1107b0302c6913f22d418bd2a1c75dfc8ba4a4861e02ad42af5a363" exitCode=0 Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.126664 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jn4tc" event={"ID":"c060e910-de6e-43df-a148-66f07bc71180","Type":"ContainerDied","Data":"020d1eb6a1107b0302c6913f22d418bd2a1c75dfc8ba4a4861e02ad42af5a363"} Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.130698 4809 generic.go:334] "Generic (PLEG): container finished" podID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerID="e4ccc725d789b3c681c84bcf99393a7b2f29a97f026c12037aa6a0eb9293a992" exitCode=0 Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.130735 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqsb2" event={"ID":"bd4be45a-8370-4cbe-a718-bb31fd64d99a","Type":"ContainerDied","Data":"e4ccc725d789b3c681c84bcf99393a7b2f29a97f026c12037aa6a0eb9293a992"} Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.132601 4809 
generic.go:334] "Generic (PLEG): container finished" podID="75d1a803-df56-424d-ace1-ecc868081fca" containerID="41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f" exitCode=0 Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.132634 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkqnv" event={"ID":"75d1a803-df56-424d-ace1-ecc868081fca","Type":"ContainerDied","Data":"41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f"} Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.446968 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:03:14 crc kubenswrapper[4809]: I0312 08:03:14.447012 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.048040 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.048354 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.048400 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.049036 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.049090 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39" gracePeriod=600 Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.140208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jn4tc" event={"ID":"c060e910-de6e-43df-a148-66f07bc71180","Type":"ContainerStarted","Data":"7e9e966724b0fb49a8489a41cdda03032b8797814018c4b1ec1ef27116c7ce3a"} Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.142210 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkqnv" event={"ID":"75d1a803-df56-424d-ace1-ecc868081fca","Type":"ContainerStarted","Data":"338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a"} Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.144084 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6gcg" event={"ID":"511df0b5-a255-46fe-aeba-cd5daa01e7c9","Type":"ContainerStarted","Data":"4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47"} Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.145672 4809 generic.go:334] "Generic (PLEG): container finished" podID="6adda456-f49a-4a6d-b09b-8841158e9268" containerID="f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04" exitCode=0 Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.145696 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-wmft4" event={"ID":"6adda456-f49a-4a6d-b09b-8841158e9268","Type":"ContainerDied","Data":"f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04"} Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.167589 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jn4tc" podStartSLOduration=3.24592913 podStartE2EDuration="53.167570896s" podCreationTimestamp="2026-03-12 08:02:22 +0000 UTC" firstStartedPulling="2026-03-12 08:02:24.662862194 +0000 UTC m=+218.244897927" lastFinishedPulling="2026-03-12 08:03:14.58450395 +0000 UTC m=+268.166539693" observedRunningTime="2026-03-12 08:03:15.165448826 +0000 UTC m=+268.747484569" watchObservedRunningTime="2026-03-12 08:03:15.167570896 +0000 UTC m=+268.749606629" Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.183868 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jkqnv" podStartSLOduration=3.315429884 podStartE2EDuration="51.183853885s" podCreationTimestamp="2026-03-12 08:02:24 +0000 UTC" firstStartedPulling="2026-03-12 08:02:26.696839978 +0000 UTC m=+220.278875711" lastFinishedPulling="2026-03-12 08:03:14.565263969 +0000 UTC m=+268.147299712" observedRunningTime="2026-03-12 08:03:15.182339363 +0000 UTC m=+268.764375106" watchObservedRunningTime="2026-03-12 08:03:15.183853885 +0000 UTC m=+268.765889618" Mar 12 08:03:15 crc kubenswrapper[4809]: I0312 08:03:15.502436 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4qrn8" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="registry-server" probeResult="failure" output=< Mar 12 08:03:15 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:03:15 crc kubenswrapper[4809]: > Mar 12 08:03:16 crc kubenswrapper[4809]: I0312 08:03:16.155498 4809 generic.go:334] "Generic (PLEG): container 
finished" podID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerID="4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47" exitCode=0 Mar 12 08:03:16 crc kubenswrapper[4809]: I0312 08:03:16.155581 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6gcg" event={"ID":"511df0b5-a255-46fe-aeba-cd5daa01e7c9","Type":"ContainerDied","Data":"4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47"} Mar 12 08:03:16 crc kubenswrapper[4809]: I0312 08:03:16.159336 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39" exitCode=0 Mar 12 08:03:16 crc kubenswrapper[4809]: I0312 08:03:16.159365 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39"} Mar 12 08:03:16 crc kubenswrapper[4809]: I0312 08:03:16.159386 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"4e9e4f9c5d2c28b5cf22cec3c1066c042f4246b7416ca727e9771e6b84eecf1f"} Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.166814 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6gcg" event={"ID":"511df0b5-a255-46fe-aeba-cd5daa01e7c9","Type":"ContainerStarted","Data":"c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253"} Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.169060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmft4" 
event={"ID":"6adda456-f49a-4a6d-b09b-8841158e9268","Type":"ContainerStarted","Data":"ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d"} Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.171486 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqsb2" event={"ID":"bd4be45a-8370-4cbe-a718-bb31fd64d99a","Type":"ContainerStarted","Data":"be53b596bf62aac81e069baa64b49314df9e66ce3279fc49765f560dbae7810f"} Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.182520 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m6gcg" podStartSLOduration=2.871462851 podStartE2EDuration="56.182495119s" podCreationTimestamp="2026-03-12 08:02:21 +0000 UTC" firstStartedPulling="2026-03-12 08:02:23.41402866 +0000 UTC m=+216.996064393" lastFinishedPulling="2026-03-12 08:03:16.725060938 +0000 UTC m=+270.307096661" observedRunningTime="2026-03-12 08:03:17.180963947 +0000 UTC m=+270.762999700" watchObservedRunningTime="2026-03-12 08:03:17.182495119 +0000 UTC m=+270.764530852" Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.198975 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pqsb2" podStartSLOduration=3.636998719 podStartE2EDuration="56.198957415s" podCreationTimestamp="2026-03-12 08:02:21 +0000 UTC" firstStartedPulling="2026-03-12 08:02:23.457326434 +0000 UTC m=+217.039362167" lastFinishedPulling="2026-03-12 08:03:16.01928513 +0000 UTC m=+269.601320863" observedRunningTime="2026-03-12 08:03:17.196286031 +0000 UTC m=+270.778321774" watchObservedRunningTime="2026-03-12 08:03:17.198957415 +0000 UTC m=+270.780993158" Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.217957 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wmft4" podStartSLOduration=3.773490527 podStartE2EDuration="54.217935669s" 
podCreationTimestamp="2026-03-12 08:02:23 +0000 UTC" firstStartedPulling="2026-03-12 08:02:25.677375433 +0000 UTC m=+219.259411166" lastFinishedPulling="2026-03-12 08:03:16.121820575 +0000 UTC m=+269.703856308" observedRunningTime="2026-03-12 08:03:17.216023816 +0000 UTC m=+270.798059559" watchObservedRunningTime="2026-03-12 08:03:17.217935669 +0000 UTC m=+270.799971402" Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.833129 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8"] Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.833334 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" podUID="5950d3a4-1323-453d-86fc-b5c9cb164dbc" containerName="controller-manager" containerID="cri-o://b2d432021fc1410884399a1bd756a2a0fb6edcfcbde9a5aab294f09c7528993b" gracePeriod=30 Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.870457 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7"] Mar 12 08:03:17 crc kubenswrapper[4809]: I0312 08:03:17.870745 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" podUID="07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" containerName="route-controller-manager" containerID="cri-o://e939c98bba0501614dfb06935887a9e21b94467b8923ce8a532eb5ba9cd88a46" gracePeriod=30 Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.180883 4809 generic.go:334] "Generic (PLEG): container finished" podID="07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" containerID="e939c98bba0501614dfb06935887a9e21b94467b8923ce8a532eb5ba9cd88a46" exitCode=0 Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.180998 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" event={"ID":"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38","Type":"ContainerDied","Data":"e939c98bba0501614dfb06935887a9e21b94467b8923ce8a532eb5ba9cd88a46"} Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.183174 4809 generic.go:334] "Generic (PLEG): container finished" podID="5950d3a4-1323-453d-86fc-b5c9cb164dbc" containerID="b2d432021fc1410884399a1bd756a2a0fb6edcfcbde9a5aab294f09c7528993b" exitCode=0 Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.183223 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" event={"ID":"5950d3a4-1323-453d-86fc-b5c9cb164dbc","Type":"ContainerDied","Data":"b2d432021fc1410884399a1bd756a2a0fb6edcfcbde9a5aab294f09c7528993b"} Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.498281 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.557284 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.570393 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-config\") pod \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.570466 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-serving-cert\") pod \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.570525 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-client-ca\") pod \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.570553 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvbzp\" (UniqueName: \"kubernetes.io/projected/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-kube-api-access-pvbzp\") pod \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\" (UID: \"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.571674 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-config" (OuterVolumeSpecName: "config") pod "07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" (UID: "07758ee8-b281-47c3-9bf5-ffc0d1aa9c38"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.572028 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-client-ca" (OuterVolumeSpecName: "client-ca") pod "07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" (UID: "07758ee8-b281-47c3-9bf5-ffc0d1aa9c38"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.577527 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-kube-api-access-pvbzp" (OuterVolumeSpecName: "kube-api-access-pvbzp") pod "07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" (UID: "07758ee8-b281-47c3-9bf5-ffc0d1aa9c38"). InnerVolumeSpecName "kube-api-access-pvbzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.577726 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" (UID: "07758ee8-b281-47c3-9bf5-ffc0d1aa9c38"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.671960 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-proxy-ca-bundles\") pod \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.672074 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-config\") pod \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.672211 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-759bs\" (UniqueName: \"kubernetes.io/projected/5950d3a4-1323-453d-86fc-b5c9cb164dbc-kube-api-access-759bs\") pod \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.672259 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5950d3a4-1323-453d-86fc-b5c9cb164dbc-serving-cert\") pod \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.672295 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-client-ca\") pod \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\" (UID: \"5950d3a4-1323-453d-86fc-b5c9cb164dbc\") " Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.672593 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.672618 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.672629 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.672668 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvbzp\" (UniqueName: \"kubernetes.io/projected/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38-kube-api-access-pvbzp\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.673933 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-client-ca" (OuterVolumeSpecName: "client-ca") pod "5950d3a4-1323-453d-86fc-b5c9cb164dbc" (UID: "5950d3a4-1323-453d-86fc-b5c9cb164dbc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.674043 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5950d3a4-1323-453d-86fc-b5c9cb164dbc" (UID: "5950d3a4-1323-453d-86fc-b5c9cb164dbc"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.674191 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-config" (OuterVolumeSpecName: "config") pod "5950d3a4-1323-453d-86fc-b5c9cb164dbc" (UID: "5950d3a4-1323-453d-86fc-b5c9cb164dbc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.677803 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5950d3a4-1323-453d-86fc-b5c9cb164dbc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5950d3a4-1323-453d-86fc-b5c9cb164dbc" (UID: "5950d3a4-1323-453d-86fc-b5c9cb164dbc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.677999 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5950d3a4-1323-453d-86fc-b5c9cb164dbc-kube-api-access-759bs" (OuterVolumeSpecName: "kube-api-access-759bs") pod "5950d3a4-1323-453d-86fc-b5c9cb164dbc" (UID: "5950d3a4-1323-453d-86fc-b5c9cb164dbc"). InnerVolumeSpecName "kube-api-access-759bs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.774136 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-759bs\" (UniqueName: \"kubernetes.io/projected/5950d3a4-1323-453d-86fc-b5c9cb164dbc-kube-api-access-759bs\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.774439 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5950d3a4-1323-453d-86fc-b5c9cb164dbc-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.774562 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.774644 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:18 crc kubenswrapper[4809]: I0312 08:03:18.774733 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5950d3a4-1323-453d-86fc-b5c9cb164dbc-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.190060 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.190052 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8" event={"ID":"5950d3a4-1323-453d-86fc-b5c9cb164dbc","Type":"ContainerDied","Data":"543600f43e61a00aaaf1fa17de8286945a0dcc2404435adb45ed207f005e28df"} Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.191292 4809 scope.go:117] "RemoveContainer" containerID="b2d432021fc1410884399a1bd756a2a0fb6edcfcbde9a5aab294f09c7528993b" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.191778 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" event={"ID":"07758ee8-b281-47c3-9bf5-ffc0d1aa9c38","Type":"ContainerDied","Data":"1835d52b4fbe98f462bb20d202a15bcc461fed1cf84d8de2a9d9227f588ef0f8"} Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.191852 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.207270 4809 scope.go:117] "RemoveContainer" containerID="e939c98bba0501614dfb06935887a9e21b94467b8923ce8a532eb5ba9cd88a46" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.214027 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8"] Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.217739 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8f5fb9cb7-ftwq8"] Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.227232 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7"] Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.229756 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b67f6bdb8-9j9d7"] Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.855221 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr"] Mar 12 08:03:19 crc kubenswrapper[4809]: E0312 08:03:19.855857 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5950d3a4-1323-453d-86fc-b5c9cb164dbc" containerName="controller-manager" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.855878 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5950d3a4-1323-453d-86fc-b5c9cb164dbc" containerName="controller-manager" Mar 12 08:03:19 crc kubenswrapper[4809]: E0312 08:03:19.855897 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" containerName="route-controller-manager" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.855906 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" containerName="route-controller-manager" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.856046 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5950d3a4-1323-453d-86fc-b5c9cb164dbc" containerName="controller-manager" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.856067 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" containerName="route-controller-manager" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.856551 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.858765 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.858851 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.858991 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm"] Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.859657 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.860246 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.860514 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.860853 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.862626 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.862682 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.862631 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.862815 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.864063 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.864163 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.864668 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 08:03:19 crc 
kubenswrapper[4809]: I0312 08:03:19.875559 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.881648 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm"] Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890393 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-config\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890441 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhkb\" (UniqueName: \"kubernetes.io/projected/408c166a-3c09-4993-bd81-59bf46798fca-kube-api-access-xhhkb\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890461 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7bdv\" (UniqueName: \"kubernetes.io/projected/879828de-d2a0-475c-b2c3-9f07caa1be65-kube-api-access-d7bdv\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890496 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-client-ca\") pod 
\"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890531 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-client-ca\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890558 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879828de-d2a0-475c-b2c3-9f07caa1be65-serving-cert\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890579 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-proxy-ca-bundles\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890599 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-config\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.890622 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408c166a-3c09-4993-bd81-59bf46798fca-serving-cert\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.901653 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr"] Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.991839 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-client-ca\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.991962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-client-ca\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.992008 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879828de-d2a0-475c-b2c3-9f07caa1be65-serving-cert\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.992040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-proxy-ca-bundles\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.992079 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-config\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.992136 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408c166a-3c09-4993-bd81-59bf46798fca-serving-cert\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.992198 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-config\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.992233 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhkb\" (UniqueName: \"kubernetes.io/projected/408c166a-3c09-4993-bd81-59bf46798fca-kube-api-access-xhhkb\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 
08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.992270 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7bdv\" (UniqueName: \"kubernetes.io/projected/879828de-d2a0-475c-b2c3-9f07caa1be65-kube-api-access-d7bdv\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.993813 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-client-ca\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.994481 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-config\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.995100 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-client-ca\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.995487 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-config\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " 
pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:19 crc kubenswrapper[4809]: I0312 08:03:19.995918 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-proxy-ca-bundles\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:20 crc kubenswrapper[4809]: I0312 08:03:19.998553 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879828de-d2a0-475c-b2c3-9f07caa1be65-serving-cert\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:20 crc kubenswrapper[4809]: I0312 08:03:19.998719 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408c166a-3c09-4993-bd81-59bf46798fca-serving-cert\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:20 crc kubenswrapper[4809]: I0312 08:03:20.010254 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhkb\" (UniqueName: \"kubernetes.io/projected/408c166a-3c09-4993-bd81-59bf46798fca-kube-api-access-xhhkb\") pod \"route-controller-manager-884c6bbf8-zb7jr\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:20 crc kubenswrapper[4809]: I0312 08:03:20.011930 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7bdv\" (UniqueName: 
\"kubernetes.io/projected/879828de-d2a0-475c-b2c3-9f07caa1be65-kube-api-access-d7bdv\") pod \"controller-manager-6dd94c7fd9-mz5zm\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:20 crc kubenswrapper[4809]: I0312 08:03:20.188644 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:20 crc kubenswrapper[4809]: I0312 08:03:20.189776 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:20 crc kubenswrapper[4809]: I0312 08:03:20.441084 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm"] Mar 12 08:03:20 crc kubenswrapper[4809]: I0312 08:03:20.501133 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr"] Mar 12 08:03:20 crc kubenswrapper[4809]: W0312 08:03:20.516028 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod408c166a_3c09_4993_bd81_59bf46798fca.slice/crio-037bfbda496275ccfc1c02d6fb392a9a641e8dd54e0b6896a47facb3bf1ada68 WatchSource:0}: Error finding container 037bfbda496275ccfc1c02d6fb392a9a641e8dd54e0b6896a47facb3bf1ada68: Status 404 returned error can't find the container with id 037bfbda496275ccfc1c02d6fb392a9a641e8dd54e0b6896a47facb3bf1ada68 Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.112141 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07758ee8-b281-47c3-9bf5-ffc0d1aa9c38" path="/var/lib/kubelet/pods/07758ee8-b281-47c3-9bf5-ffc0d1aa9c38/volumes" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.113341 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5950d3a4-1323-453d-86fc-b5c9cb164dbc" path="/var/lib/kubelet/pods/5950d3a4-1323-453d-86fc-b5c9cb164dbc/volumes" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.214126 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" event={"ID":"879828de-d2a0-475c-b2c3-9f07caa1be65","Type":"ContainerStarted","Data":"bac4f94a06e90f981ff572e9d1b5c2b8cc5c8a24e9419dc45b59483b46abec4a"} Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.214169 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" event={"ID":"879828de-d2a0-475c-b2c3-9f07caa1be65","Type":"ContainerStarted","Data":"f0f8c9bd5779a5c88ffa6e7bf1ebebec7049561d4cbd5841e08fde1291740bd7"} Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.214309 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.215691 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" event={"ID":"408c166a-3c09-4993-bd81-59bf46798fca","Type":"ContainerStarted","Data":"6d4a13ea6883b045a203e264c758cc69c543583d0e297932cd689b5d6b9d5869"} Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.215729 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" event={"ID":"408c166a-3c09-4993-bd81-59bf46798fca","Type":"ContainerStarted","Data":"037bfbda496275ccfc1c02d6fb392a9a641e8dd54e0b6896a47facb3bf1ada68"} Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.215902 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.219060 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.224065 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.237135 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" podStartSLOduration=4.237098642 podStartE2EDuration="4.237098642s" podCreationTimestamp="2026-03-12 08:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:03:21.235576129 +0000 UTC m=+274.817611862" watchObservedRunningTime="2026-03-12 08:03:21.237098642 +0000 UTC m=+274.819134365" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.279582 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" podStartSLOduration=4.2795619049999996 podStartE2EDuration="4.279561905s" podCreationTimestamp="2026-03-12 08:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:03:21.277642573 +0000 UTC m=+274.859678326" watchObservedRunningTime="2026-03-12 08:03:21.279561905 +0000 UTC m=+274.861597648" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.581187 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.581231 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 
08:03:21.633309 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.658611 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.658659 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.697016 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.856305 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.856372 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:03:21 crc kubenswrapper[4809]: I0312 08:03:21.909515 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:03:22 crc kubenswrapper[4809]: I0312 08:03:22.262424 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:03:22 crc kubenswrapper[4809]: I0312 08:03:22.272893 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:03:22 crc kubenswrapper[4809]: I0312 08:03:22.281632 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:03:23 crc kubenswrapper[4809]: I0312 08:03:23.141167 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-m6gcg"] Mar 12 08:03:23 crc kubenswrapper[4809]: I0312 08:03:23.227168 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:03:23 crc kubenswrapper[4809]: I0312 08:03:23.227240 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:03:23 crc kubenswrapper[4809]: I0312 08:03:23.266406 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:03:23 crc kubenswrapper[4809]: I0312 08:03:23.740820 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:03:23 crc kubenswrapper[4809]: I0312 08:03:23.741969 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:03:23 crc kubenswrapper[4809]: I0312 08:03:23.749005 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btdvf"] Mar 12 08:03:23 crc kubenswrapper[4809]: I0312 08:03:23.802322 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:03:24 crc kubenswrapper[4809]: I0312 08:03:24.233033 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-btdvf" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerName="registry-server" containerID="cri-o://de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5" gracePeriod=2 Mar 12 08:03:24 crc kubenswrapper[4809]: I0312 08:03:24.234398 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m6gcg" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" 
containerName="registry-server" containerID="cri-o://c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253" gracePeriod=2 Mar 12 08:03:24 crc kubenswrapper[4809]: I0312 08:03:24.288976 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:03:24 crc kubenswrapper[4809]: I0312 08:03:24.301640 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:03:24 crc kubenswrapper[4809]: I0312 08:03:24.496420 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:03:24 crc kubenswrapper[4809]: I0312 08:03:24.549541 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:03:24 crc kubenswrapper[4809]: I0312 08:03:24.725332 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.752395 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.859595 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk59x\" (UniqueName: \"kubernetes.io/projected/511df0b5-a255-46fe-aeba-cd5daa01e7c9-kube-api-access-xk59x\") pod \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.862265 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-utilities\") pod \"1df02216-0a1b-4417-9914-e7b9452a9c6b\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.862317 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-utilities\") pod \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.862349 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-catalog-content\") pod \"1df02216-0a1b-4417-9914-e7b9452a9c6b\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.862422 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl7hr\" (UniqueName: \"kubernetes.io/projected/1df02216-0a1b-4417-9914-e7b9452a9c6b-kube-api-access-xl7hr\") pod \"1df02216-0a1b-4417-9914-e7b9452a9c6b\" (UID: \"1df02216-0a1b-4417-9914-e7b9452a9c6b\") " Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.862456 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-catalog-content\") pod \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\" (UID: \"511df0b5-a255-46fe-aeba-cd5daa01e7c9\") " Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.863501 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-utilities" (OuterVolumeSpecName: "utilities") pod "1df02216-0a1b-4417-9914-e7b9452a9c6b" (UID: "1df02216-0a1b-4417-9914-e7b9452a9c6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.864656 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-utilities" (OuterVolumeSpecName: "utilities") pod "511df0b5-a255-46fe-aeba-cd5daa01e7c9" (UID: "511df0b5-a255-46fe-aeba-cd5daa01e7c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.868259 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511df0b5-a255-46fe-aeba-cd5daa01e7c9-kube-api-access-xk59x" (OuterVolumeSpecName: "kube-api-access-xk59x") pod "511df0b5-a255-46fe-aeba-cd5daa01e7c9" (UID: "511df0b5-a255-46fe-aeba-cd5daa01e7c9"). InnerVolumeSpecName "kube-api-access-xk59x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.868372 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df02216-0a1b-4417-9914-e7b9452a9c6b-kube-api-access-xl7hr" (OuterVolumeSpecName: "kube-api-access-xl7hr") pod "1df02216-0a1b-4417-9914-e7b9452a9c6b" (UID: "1df02216-0a1b-4417-9914-e7b9452a9c6b"). InnerVolumeSpecName "kube-api-access-xl7hr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.907951 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.908015 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.919616 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "511df0b5-a255-46fe-aeba-cd5daa01e7c9" (UID: "511df0b5-a255-46fe-aeba-cd5daa01e7c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.926371 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1df02216-0a1b-4417-9914-e7b9452a9c6b" (UID: "1df02216-0a1b-4417-9914-e7b9452a9c6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.951624 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.963321 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl7hr\" (UniqueName: \"kubernetes.io/projected/1df02216-0a1b-4417-9914-e7b9452a9c6b-kube-api-access-xl7hr\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.963347 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.963356 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk59x\" (UniqueName: \"kubernetes.io/projected/511df0b5-a255-46fe-aeba-cd5daa01e7c9-kube-api-access-xk59x\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.963367 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.963375 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511df0b5-a255-46fe-aeba-cd5daa01e7c9-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:24.963386 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df02216-0a1b-4417-9914-e7b9452a9c6b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.243503 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerID="c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253" exitCode=0 Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.243580 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6gcg" event={"ID":"511df0b5-a255-46fe-aeba-cd5daa01e7c9","Type":"ContainerDied","Data":"c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253"} Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.243665 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6gcg" event={"ID":"511df0b5-a255-46fe-aeba-cd5daa01e7c9","Type":"ContainerDied","Data":"d69eda06064bdb407560d3f538d582f6d628efe96b17652ce937534d9f250d61"} Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.243690 4809 scope.go:117] "RemoveContainer" containerID="c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.243720 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6gcg" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.246645 4809 generic.go:334] "Generic (PLEG): container finished" podID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerID="de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5" exitCode=0 Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.247214 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btdvf" event={"ID":"1df02216-0a1b-4417-9914-e7b9452a9c6b","Type":"ContainerDied","Data":"de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5"} Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.247266 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btdvf" event={"ID":"1df02216-0a1b-4417-9914-e7b9452a9c6b","Type":"ContainerDied","Data":"d9d01e20a325b460e97026229e56f712e5d9feb785b9eb4ba03efe2b351ec00d"} Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.247369 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btdvf" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.267333 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6gcg"] Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.275713 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m6gcg"] Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.284314 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btdvf"] Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.284679 4809 scope.go:117] "RemoveContainer" containerID="4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.288309 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-btdvf"] Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.303041 4809 scope.go:117] "RemoveContainer" containerID="1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.303182 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.323723 4809 scope.go:117] "RemoveContainer" containerID="c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253" Mar 12 08:03:25 crc kubenswrapper[4809]: E0312 08:03:25.325157 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253\": container with ID starting with c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253 not found: ID does not exist" containerID="c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253" Mar 12 08:03:25 crc 
kubenswrapper[4809]: I0312 08:03:25.325276 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253"} err="failed to get container status \"c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253\": rpc error: code = NotFound desc = could not find container \"c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253\": container with ID starting with c37b723986e23b116af10ceb482e227a3cb9019bb5ac5d75f3113ac68d688253 not found: ID does not exist" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.325367 4809 scope.go:117] "RemoveContainer" containerID="4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47" Mar 12 08:03:25 crc kubenswrapper[4809]: E0312 08:03:25.330143 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47\": container with ID starting with 4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47 not found: ID does not exist" containerID="4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.330206 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47"} err="failed to get container status \"4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47\": rpc error: code = NotFound desc = could not find container \"4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47\": container with ID starting with 4c9dce325a4661df3da9c3caf117ee0cc98259c75246af3ab21f849aae08cc47 not found: ID does not exist" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.330248 4809 scope.go:117] "RemoveContainer" containerID="1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0" Mar 12 
08:03:25 crc kubenswrapper[4809]: E0312 08:03:25.330688 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0\": container with ID starting with 1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0 not found: ID does not exist" containerID="1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.330782 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0"} err="failed to get container status \"1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0\": rpc error: code = NotFound desc = could not find container \"1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0\": container with ID starting with 1944e677b6759a1709b151e612f6421b377d4bababca5aab628e2327fee96cc0 not found: ID does not exist" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.330862 4809 scope.go:117] "RemoveContainer" containerID="de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.348370 4809 scope.go:117] "RemoveContainer" containerID="2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.395723 4809 scope.go:117] "RemoveContainer" containerID="204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.412168 4809 scope.go:117] "RemoveContainer" containerID="de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5" Mar 12 08:03:25 crc kubenswrapper[4809]: E0312 08:03:25.412812 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5\": container with ID starting with de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5 not found: ID does not exist" containerID="de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.412926 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5"} err="failed to get container status \"de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5\": rpc error: code = NotFound desc = could not find container \"de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5\": container with ID starting with de752abbaa685e978a61b78459c4aaabf2e0f921dc8cf8b07ed5274c61ecd4f5 not found: ID does not exist" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.413048 4809 scope.go:117] "RemoveContainer" containerID="2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5" Mar 12 08:03:25 crc kubenswrapper[4809]: E0312 08:03:25.413615 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5\": container with ID starting with 2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5 not found: ID does not exist" containerID="2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.413674 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5"} err="failed to get container status \"2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5\": rpc error: code = NotFound desc = could not find container \"2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5\": container with ID 
starting with 2281cd0cee5ed13354f6b451ab48ed78704e56eeaf689a2e811f43b46106b6c5 not found: ID does not exist" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.413717 4809 scope.go:117] "RemoveContainer" containerID="204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1" Mar 12 08:03:25 crc kubenswrapper[4809]: E0312 08:03:25.414544 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1\": container with ID starting with 204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1 not found: ID does not exist" containerID="204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1" Mar 12 08:03:25 crc kubenswrapper[4809]: I0312 08:03:25.414646 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1"} err="failed to get container status \"204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1\": rpc error: code = NotFound desc = could not find container \"204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1\": container with ID starting with 204d12bc9add314f4a2fdc879cb18a3e9bb7e0dbec3d948147d5514fdac04ca1 not found: ID does not exist" Mar 12 08:03:26 crc kubenswrapper[4809]: I0312 08:03:26.146299 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmft4"] Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.111784 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" path="/var/lib/kubelet/pods/1df02216-0a1b-4417-9914-e7b9452a9c6b/volumes" Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.112697 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" 
path="/var/lib/kubelet/pods/511df0b5-a255-46fe-aeba-cd5daa01e7c9/volumes" Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.261699 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wmft4" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" containerName="registry-server" containerID="cri-o://ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d" gracePeriod=2 Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.747807 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.911871 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj5wj\" (UniqueName: \"kubernetes.io/projected/6adda456-f49a-4a6d-b09b-8841158e9268-kube-api-access-mj5wj\") pod \"6adda456-f49a-4a6d-b09b-8841158e9268\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.911921 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-utilities\") pod \"6adda456-f49a-4a6d-b09b-8841158e9268\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.911999 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-catalog-content\") pod \"6adda456-f49a-4a6d-b09b-8841158e9268\" (UID: \"6adda456-f49a-4a6d-b09b-8841158e9268\") " Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.913106 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-utilities" (OuterVolumeSpecName: "utilities") pod 
"6adda456-f49a-4a6d-b09b-8841158e9268" (UID: "6adda456-f49a-4a6d-b09b-8841158e9268"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.913462 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.926327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adda456-f49a-4a6d-b09b-8841158e9268-kube-api-access-mj5wj" (OuterVolumeSpecName: "kube-api-access-mj5wj") pod "6adda456-f49a-4a6d-b09b-8841158e9268" (UID: "6adda456-f49a-4a6d-b09b-8841158e9268"). InnerVolumeSpecName "kube-api-access-mj5wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:27 crc kubenswrapper[4809]: I0312 08:03:27.939510 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6adda456-f49a-4a6d-b09b-8841158e9268" (UID: "6adda456-f49a-4a6d-b09b-8841158e9268"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.013778 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj5wj\" (UniqueName: \"kubernetes.io/projected/6adda456-f49a-4a6d-b09b-8841158e9268-kube-api-access-mj5wj\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.013820 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6adda456-f49a-4a6d-b09b-8841158e9268-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.291263 4809 generic.go:334] "Generic (PLEG): container finished" podID="6adda456-f49a-4a6d-b09b-8841158e9268" containerID="ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d" exitCode=0 Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.291428 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmft4" event={"ID":"6adda456-f49a-4a6d-b09b-8841158e9268","Type":"ContainerDied","Data":"ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d"} Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.291982 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wmft4" event={"ID":"6adda456-f49a-4a6d-b09b-8841158e9268","Type":"ContainerDied","Data":"40fb895a60cbccb4c660aa8fe9e0263ae497a534e36ce898654003ff20d2b6b6"} Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.292021 4809 scope.go:117] "RemoveContainer" containerID="ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.291446 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wmft4" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.316485 4809 scope.go:117] "RemoveContainer" containerID="f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.340104 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmft4"] Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.345573 4809 scope.go:117] "RemoveContainer" containerID="6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.346384 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wmft4"] Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.360849 4809 scope.go:117] "RemoveContainer" containerID="ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d" Mar 12 08:03:28 crc kubenswrapper[4809]: E0312 08:03:28.361526 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d\": container with ID starting with ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d not found: ID does not exist" containerID="ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.361587 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d"} err="failed to get container status \"ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d\": rpc error: code = NotFound desc = could not find container \"ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d\": container with ID starting with ee9d34f88a042bc889ff740d794930d35570a581c25163a597f5721a1291e50d not found: 
ID does not exist" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.361629 4809 scope.go:117] "RemoveContainer" containerID="f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04" Mar 12 08:03:28 crc kubenswrapper[4809]: E0312 08:03:28.362145 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04\": container with ID starting with f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04 not found: ID does not exist" containerID="f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.362234 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04"} err="failed to get container status \"f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04\": rpc error: code = NotFound desc = could not find container \"f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04\": container with ID starting with f9cfbbdaca1fed92371305d7ed808f71f2986aae32fa1c189de1a2e0e1e1dd04 not found: ID does not exist" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.362286 4809 scope.go:117] "RemoveContainer" containerID="6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23" Mar 12 08:03:28 crc kubenswrapper[4809]: E0312 08:03:28.362678 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23\": container with ID starting with 6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23 not found: ID does not exist" containerID="6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.362709 4809 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23"} err="failed to get container status \"6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23\": rpc error: code = NotFound desc = could not find container \"6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23\": container with ID starting with 6bf8beff6fcfa4314eb9cef0c30e6e8c14b42d80fb72307c2ab2b795c7686b23 not found: ID does not exist" Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.548064 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jkqnv"] Mar 12 08:03:28 crc kubenswrapper[4809]: I0312 08:03:28.548611 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jkqnv" podUID="75d1a803-df56-424d-ace1-ecc868081fca" containerName="registry-server" containerID="cri-o://338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a" gracePeriod=2 Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.083732 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.125465 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" path="/var/lib/kubelet/pods/6adda456-f49a-4a6d-b09b-8841158e9268/volumes" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.230008 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wphrv\" (UniqueName: \"kubernetes.io/projected/75d1a803-df56-424d-ace1-ecc868081fca-kube-api-access-wphrv\") pod \"75d1a803-df56-424d-ace1-ecc868081fca\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.230197 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-catalog-content\") pod \"75d1a803-df56-424d-ace1-ecc868081fca\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.230282 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-utilities\") pod \"75d1a803-df56-424d-ace1-ecc868081fca\" (UID: \"75d1a803-df56-424d-ace1-ecc868081fca\") " Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.231798 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-utilities" (OuterVolumeSpecName: "utilities") pod "75d1a803-df56-424d-ace1-ecc868081fca" (UID: "75d1a803-df56-424d-ace1-ecc868081fca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.237511 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d1a803-df56-424d-ace1-ecc868081fca-kube-api-access-wphrv" (OuterVolumeSpecName: "kube-api-access-wphrv") pod "75d1a803-df56-424d-ace1-ecc868081fca" (UID: "75d1a803-df56-424d-ace1-ecc868081fca"). InnerVolumeSpecName "kube-api-access-wphrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.302251 4809 generic.go:334] "Generic (PLEG): container finished" podID="75d1a803-df56-424d-ace1-ecc868081fca" containerID="338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a" exitCode=0 Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.302305 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkqnv" event={"ID":"75d1a803-df56-424d-ace1-ecc868081fca","Type":"ContainerDied","Data":"338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a"} Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.302321 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkqnv" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.302352 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkqnv" event={"ID":"75d1a803-df56-424d-ace1-ecc868081fca","Type":"ContainerDied","Data":"a1b74cefe74c673a58eef8be378af9c6867ed22ac98aaebe1df76dd907066b09"} Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.302378 4809 scope.go:117] "RemoveContainer" containerID="338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.327697 4809 scope.go:117] "RemoveContainer" containerID="41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.331476 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.331508 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphrv\" (UniqueName: \"kubernetes.io/projected/75d1a803-df56-424d-ace1-ecc868081fca-kube-api-access-wphrv\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.351960 4809 scope.go:117] "RemoveContainer" containerID="b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.360887 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75d1a803-df56-424d-ace1-ecc868081fca" (UID: "75d1a803-df56-424d-ace1-ecc868081fca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.373675 4809 scope.go:117] "RemoveContainer" containerID="338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a" Mar 12 08:03:29 crc kubenswrapper[4809]: E0312 08:03:29.374026 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a\": container with ID starting with 338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a not found: ID does not exist" containerID="338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.374061 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a"} err="failed to get container status \"338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a\": rpc error: code = NotFound desc = could not find container \"338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a\": container with ID starting with 338c9dc207e5bb491059a075f056019c1a58b6388dd08c3696ffb4792152163a not found: ID does not exist" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.374087 4809 scope.go:117] "RemoveContainer" containerID="41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f" Mar 12 08:03:29 crc kubenswrapper[4809]: E0312 08:03:29.374471 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f\": container with ID starting with 41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f not found: ID does not exist" containerID="41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.374491 
4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f"} err="failed to get container status \"41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f\": rpc error: code = NotFound desc = could not find container \"41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f\": container with ID starting with 41db58c1878faa146941df56f7f5193bc9208102fd525a771cb1749f1fdcc68f not found: ID does not exist" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.374505 4809 scope.go:117] "RemoveContainer" containerID="b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6" Mar 12 08:03:29 crc kubenswrapper[4809]: E0312 08:03:29.374796 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6\": container with ID starting with b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6 not found: ID does not exist" containerID="b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.374819 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6"} err="failed to get container status \"b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6\": rpc error: code = NotFound desc = could not find container \"b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6\": container with ID starting with b9059eee3e580902a476f96ef5c15d6835a95bc18c208a5b6ade171f4d3a7bc6 not found: ID does not exist" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.433699 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/75d1a803-df56-424d-ace1-ecc868081fca-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.654148 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jkqnv"] Mar 12 08:03:29 crc kubenswrapper[4809]: I0312 08:03:29.661530 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jkqnv"] Mar 12 08:03:31 crc kubenswrapper[4809]: I0312 08:03:31.120090 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d1a803-df56-424d-ace1-ecc868081fca" path="/var/lib/kubelet/pods/75d1a803-df56-424d-ace1-ecc868081fca/volumes" Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.389474 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" podUID="3e14730e-9ab1-4dd1-b786-142b82b59802" containerName="oauth-openshift" containerID="cri-o://07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8" gracePeriod=15 Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.898899 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm"] Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.899259 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" podUID="879828de-d2a0-475c-b2c3-9f07caa1be65" containerName="controller-manager" containerID="cri-o://bac4f94a06e90f981ff572e9d1b5c2b8cc5c8a24e9419dc45b59483b46abec4a" gracePeriod=30 Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.938416 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.960961 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-cliconfig\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961018 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-ocp-branding-template\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961048 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-idp-0-file-data\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961073 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-login\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961108 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-router-certs\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: 
\"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961182 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-provider-selection\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961253 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-dir\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961302 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-serving-cert\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961356 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-service-ca\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961414 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-session\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 
08:03:37.961448 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-policies\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961491 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-error\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961556 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-trusted-ca-bundle\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.961587 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msw9j\" (UniqueName: \"kubernetes.io/projected/3e14730e-9ab1-4dd1-b786-142b82b59802-kube-api-access-msw9j\") pod \"3e14730e-9ab1-4dd1-b786-142b82b59802\" (UID: \"3e14730e-9ab1-4dd1-b786-142b82b59802\") " Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.963396 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.963886 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.966728 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.967208 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.967446 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.992953 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr"] Mar 12 08:03:37 crc kubenswrapper[4809]: I0312 08:03:37.993180 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" podUID="408c166a-3c09-4993-bd81-59bf46798fca" containerName="route-controller-manager" containerID="cri-o://6d4a13ea6883b045a203e264c758cc69c543583d0e297932cd689b5d6b9d5869" gracePeriod=30 Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.032226 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.033350 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.035578 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.036934 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.037204 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.038479 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.055295 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e14730e-9ab1-4dd1-b786-142b82b59802-kube-api-access-msw9j" (OuterVolumeSpecName: "kube-api-access-msw9j") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "kube-api-access-msw9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.056711 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.059441 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3e14730e-9ab1-4dd1-b786-142b82b59802" (UID: "3e14730e-9ab1-4dd1-b786-142b82b59802"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063808 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063867 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063886 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063905 4809 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063919 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063935 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063951 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msw9j\" (UniqueName: 
\"kubernetes.io/projected/3e14730e-9ab1-4dd1-b786-142b82b59802-kube-api-access-msw9j\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063964 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063977 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.063991 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.064004 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.064019 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.064033 4809 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3e14730e-9ab1-4dd1-b786-142b82b59802-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: 
I0312 08:03:38.064053 4809 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e14730e-9ab1-4dd1-b786-142b82b59802-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.362186 4809 generic.go:334] "Generic (PLEG): container finished" podID="879828de-d2a0-475c-b2c3-9f07caa1be65" containerID="bac4f94a06e90f981ff572e9d1b5c2b8cc5c8a24e9419dc45b59483b46abec4a" exitCode=0 Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.362266 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" event={"ID":"879828de-d2a0-475c-b2c3-9f07caa1be65","Type":"ContainerDied","Data":"bac4f94a06e90f981ff572e9d1b5c2b8cc5c8a24e9419dc45b59483b46abec4a"} Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.363673 4809 generic.go:334] "Generic (PLEG): container finished" podID="408c166a-3c09-4993-bd81-59bf46798fca" containerID="6d4a13ea6883b045a203e264c758cc69c543583d0e297932cd689b5d6b9d5869" exitCode=0 Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.363711 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" event={"ID":"408c166a-3c09-4993-bd81-59bf46798fca","Type":"ContainerDied","Data":"6d4a13ea6883b045a203e264c758cc69c543583d0e297932cd689b5d6b9d5869"} Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.367030 4809 generic.go:334] "Generic (PLEG): container finished" podID="3e14730e-9ab1-4dd1-b786-142b82b59802" containerID="07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8" exitCode=0 Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.367053 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" 
event={"ID":"3e14730e-9ab1-4dd1-b786-142b82b59802","Type":"ContainerDied","Data":"07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8"} Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.367068 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" event={"ID":"3e14730e-9ab1-4dd1-b786-142b82b59802","Type":"ContainerDied","Data":"129d9cedf8fafeda64a7715f3b69433c81c8b858a158b9d265d5bce44dcb5563"} Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.367084 4809 scope.go:117] "RemoveContainer" containerID="07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.367209 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s8wdz" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.415664 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s8wdz"] Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.415743 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s8wdz"] Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.434440 4809 scope.go:117] "RemoveContainer" containerID="07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8" Mar 12 08:03:38 crc kubenswrapper[4809]: E0312 08:03:38.440276 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8\": container with ID starting with 07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8 not found: ID does not exist" containerID="07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.440347 4809 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8"} err="failed to get container status \"07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8\": rpc error: code = NotFound desc = could not find container \"07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8\": container with ID starting with 07997b868a10f171c145e86a4a3ba2c80b0eae13cf43103285c8510b1f08f0c8 not found: ID does not exist" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.641543 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.680498 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-client-ca\") pod \"408c166a-3c09-4993-bd81-59bf46798fca\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.680637 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhkb\" (UniqueName: \"kubernetes.io/projected/408c166a-3c09-4993-bd81-59bf46798fca-kube-api-access-xhhkb\") pod \"408c166a-3c09-4993-bd81-59bf46798fca\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.680696 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408c166a-3c09-4993-bd81-59bf46798fca-serving-cert\") pod \"408c166a-3c09-4993-bd81-59bf46798fca\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.681522 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-client-ca" (OuterVolumeSpecName: 
"client-ca") pod "408c166a-3c09-4993-bd81-59bf46798fca" (UID: "408c166a-3c09-4993-bd81-59bf46798fca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.681782 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.681795 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-config\") pod \"408c166a-3c09-4993-bd81-59bf46798fca\" (UID: \"408c166a-3c09-4993-bd81-59bf46798fca\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.682357 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-config" (OuterVolumeSpecName: "config") pod "408c166a-3c09-4993-bd81-59bf46798fca" (UID: "408c166a-3c09-4993-bd81-59bf46798fca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.682880 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.682900 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408c166a-3c09-4993-bd81-59bf46798fca-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.686429 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/408c166a-3c09-4993-bd81-59bf46798fca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "408c166a-3c09-4993-bd81-59bf46798fca" (UID: "408c166a-3c09-4993-bd81-59bf46798fca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.686494 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408c166a-3c09-4993-bd81-59bf46798fca-kube-api-access-xhhkb" (OuterVolumeSpecName: "kube-api-access-xhhkb") pod "408c166a-3c09-4993-bd81-59bf46798fca" (UID: "408c166a-3c09-4993-bd81-59bf46798fca"). InnerVolumeSpecName "kube-api-access-xhhkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.784371 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-config\") pod \"879828de-d2a0-475c-b2c3-9f07caa1be65\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.784452 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7bdv\" (UniqueName: \"kubernetes.io/projected/879828de-d2a0-475c-b2c3-9f07caa1be65-kube-api-access-d7bdv\") pod \"879828de-d2a0-475c-b2c3-9f07caa1be65\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.784514 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-proxy-ca-bundles\") pod \"879828de-d2a0-475c-b2c3-9f07caa1be65\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.784573 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879828de-d2a0-475c-b2c3-9f07caa1be65-serving-cert\") pod \"879828de-d2a0-475c-b2c3-9f07caa1be65\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.784600 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-client-ca\") pod \"879828de-d2a0-475c-b2c3-9f07caa1be65\" (UID: \"879828de-d2a0-475c-b2c3-9f07caa1be65\") " Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.784868 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhhkb\" (UniqueName: 
\"kubernetes.io/projected/408c166a-3c09-4993-bd81-59bf46798fca-kube-api-access-xhhkb\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.784888 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408c166a-3c09-4993-bd81-59bf46798fca-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.785548 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-client-ca" (OuterVolumeSpecName: "client-ca") pod "879828de-d2a0-475c-b2c3-9f07caa1be65" (UID: "879828de-d2a0-475c-b2c3-9f07caa1be65"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.785584 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "879828de-d2a0-475c-b2c3-9f07caa1be65" (UID: "879828de-d2a0-475c-b2c3-9f07caa1be65"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.785823 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-config" (OuterVolumeSpecName: "config") pod "879828de-d2a0-475c-b2c3-9f07caa1be65" (UID: "879828de-d2a0-475c-b2c3-9f07caa1be65"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.788753 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879828de-d2a0-475c-b2c3-9f07caa1be65-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "879828de-d2a0-475c-b2c3-9f07caa1be65" (UID: "879828de-d2a0-475c-b2c3-9f07caa1be65"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.789443 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879828de-d2a0-475c-b2c3-9f07caa1be65-kube-api-access-d7bdv" (OuterVolumeSpecName: "kube-api-access-d7bdv") pod "879828de-d2a0-475c-b2c3-9f07caa1be65" (UID: "879828de-d2a0-475c-b2c3-9f07caa1be65"). InnerVolumeSpecName "kube-api-access-d7bdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.886521 4809 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/879828de-d2a0-475c-b2c3-9f07caa1be65-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.886577 4809 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.886591 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.886605 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7bdv\" (UniqueName: \"kubernetes.io/projected/879828de-d2a0-475c-b2c3-9f07caa1be65-kube-api-access-d7bdv\") on node \"crc\" DevicePath 
\"\"" Mar 12 08:03:38 crc kubenswrapper[4809]: I0312 08:03:38.886619 4809 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/879828de-d2a0-475c-b2c3-9f07caa1be65-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.115011 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e14730e-9ab1-4dd1-b786-142b82b59802" path="/var/lib/kubelet/pods/3e14730e-9ab1-4dd1-b786-142b82b59802/volumes" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.375579 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" event={"ID":"879828de-d2a0-475c-b2c3-9f07caa1be65","Type":"ContainerDied","Data":"f0f8c9bd5779a5c88ffa6e7bf1ebebec7049561d4cbd5841e08fde1291740bd7"} Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.375632 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.375666 4809 scope.go:117] "RemoveContainer" containerID="bac4f94a06e90f981ff572e9d1b5c2b8cc5c8a24e9419dc45b59483b46abec4a" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.378995 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" event={"ID":"408c166a-3c09-4993-bd81-59bf46798fca","Type":"ContainerDied","Data":"037bfbda496275ccfc1c02d6fb392a9a641e8dd54e0b6896a47facb3bf1ada68"} Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.379162 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.394993 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm"] Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.398263 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dd94c7fd9-mz5zm"] Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.417374 4809 scope.go:117] "RemoveContainer" containerID="6d4a13ea6883b045a203e264c758cc69c543583d0e297932cd689b5d6b9d5869" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.424660 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr"] Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.429316 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-884c6bbf8-zb7jr"] Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.924604 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc"] Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925098 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerName="extract-content" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925158 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerName="extract-content" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925183 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d1a803-df56-424d-ace1-ecc868081fca" containerName="extract-utilities" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925197 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="75d1a803-df56-424d-ace1-ecc868081fca" containerName="extract-utilities" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925255 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925267 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925286 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" containerName="extract-content" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925297 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" containerName="extract-content" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925313 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408c166a-3c09-4993-bd81-59bf46798fca" containerName="route-controller-manager" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925325 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="408c166a-3c09-4993-bd81-59bf46798fca" containerName="route-controller-manager" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925340 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d1a803-df56-424d-ace1-ecc868081fca" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925351 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d1a803-df56-424d-ace1-ecc868081fca" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925369 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerName="extract-utilities" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925381 4809 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerName="extract-utilities" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925392 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" containerName="extract-utilities" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925403 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" containerName="extract-utilities" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925418 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerName="extract-utilities" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925432 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerName="extract-utilities" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925445 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d1a803-df56-424d-ace1-ecc868081fca" containerName="extract-content" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925454 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d1a803-df56-424d-ace1-ecc868081fca" containerName="extract-content" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925467 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925478 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925492 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerName="extract-content" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925503 4809 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerName="extract-content" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925515 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925527 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925537 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879828de-d2a0-475c-b2c3-9f07caa1be65" containerName="controller-manager" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925547 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="879828de-d2a0-475c-b2c3-9f07caa1be65" containerName="controller-manager" Mar 12 08:03:39 crc kubenswrapper[4809]: E0312 08:03:39.925566 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e14730e-9ab1-4dd1-b786-142b82b59802" containerName="oauth-openshift" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925577 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e14730e-9ab1-4dd1-b786-142b82b59802" containerName="oauth-openshift" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925779 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df02216-0a1b-4417-9914-e7b9452a9c6b" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925804 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6adda456-f49a-4a6d-b09b-8841158e9268" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925824 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d1a803-df56-424d-ace1-ecc868081fca" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925848 4809 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="511df0b5-a255-46fe-aeba-cd5daa01e7c9" containerName="registry-server" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925866 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="408c166a-3c09-4993-bd81-59bf46798fca" containerName="route-controller-manager" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925880 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="879828de-d2a0-475c-b2c3-9f07caa1be65" containerName="controller-manager" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.925894 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e14730e-9ab1-4dd1-b786-142b82b59802" containerName="oauth-openshift" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.926743 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.929031 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.929587 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh"] Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.929953 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.930239 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.931070 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.934896 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.935794 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.935872 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.935794 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.936467 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.936500 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.936611 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.936901 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.938075 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.944619 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 08:03:39 crc 
kubenswrapper[4809]: I0312 08:03:39.948285 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc"] Mar 12 08:03:39 crc kubenswrapper[4809]: I0312 08:03:39.962073 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh"] Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.002387 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tg2\" (UniqueName: \"kubernetes.io/projected/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-kube-api-access-f5tg2\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.002478 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfc7z\" (UniqueName: \"kubernetes.io/projected/5ba37d8e-9139-402a-9909-8a9c3fa4d103-kube-api-access-tfc7z\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.002664 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-serving-cert\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.002707 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-proxy-ca-bundles\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.002774 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ba37d8e-9139-402a-9909-8a9c3fa4d103-client-ca\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.002807 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-client-ca\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.003044 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba37d8e-9139-402a-9909-8a9c3fa4d103-serving-cert\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.003110 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba37d8e-9139-402a-9909-8a9c3fa4d103-config\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " 
pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.003164 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-config\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.104396 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tg2\" (UniqueName: \"kubernetes.io/projected/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-kube-api-access-f5tg2\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.104456 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfc7z\" (UniqueName: \"kubernetes.io/projected/5ba37d8e-9139-402a-9909-8a9c3fa4d103-kube-api-access-tfc7z\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.104514 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-serving-cert\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.104539 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-proxy-ca-bundles\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.104562 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ba37d8e-9139-402a-9909-8a9c3fa4d103-client-ca\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.104579 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-client-ca\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.104618 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba37d8e-9139-402a-9909-8a9c3fa4d103-serving-cert\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.104640 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba37d8e-9139-402a-9909-8a9c3fa4d103-config\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 
08:03:40.104682 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-config\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.105828 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ba37d8e-9139-402a-9909-8a9c3fa4d103-client-ca\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.106465 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba37d8e-9139-402a-9909-8a9c3fa4d103-config\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.106700 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-client-ca\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.107184 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-config\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 
08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.107514 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-proxy-ca-bundles\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.111244 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba37d8e-9139-402a-9909-8a9c3fa4d103-serving-cert\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: \"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.112333 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-serving-cert\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.128732 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tg2\" (UniqueName: \"kubernetes.io/projected/c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b-kube-api-access-f5tg2\") pod \"controller-manager-f7c76cdd5-nbjpc\" (UID: \"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b\") " pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.137882 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfc7z\" (UniqueName: \"kubernetes.io/projected/5ba37d8e-9139-402a-9909-8a9c3fa4d103-kube-api-access-tfc7z\") pod \"route-controller-manager-5dbd9bc447-2v8gh\" (UID: 
\"5ba37d8e-9139-402a-9909-8a9c3fa4d103\") " pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.255903 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.269201 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.754781 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc"] Mar 12 08:03:40 crc kubenswrapper[4809]: W0312 08:03:40.765950 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1cae54e_b80d_4bac_bfd6_fcc3b6e8832b.slice/crio-66b6dc2d8c65248f4dcd0cbaa077405a89867e0c19c46a679c75fa5e6d171812 WatchSource:0}: Error finding container 66b6dc2d8c65248f4dcd0cbaa077405a89867e0c19c46a679c75fa5e6d171812: Status 404 returned error can't find the container with id 66b6dc2d8c65248f4dcd0cbaa077405a89867e0c19c46a679c75fa5e6d171812 Mar 12 08:03:40 crc kubenswrapper[4809]: I0312 08:03:40.817749 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh"] Mar 12 08:03:40 crc kubenswrapper[4809]: W0312 08:03:40.827707 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ba37d8e_9139_402a_9909_8a9c3fa4d103.slice/crio-00d9f71d257154ebff8314dda1ebcea4a18ecb2580ee1492196bacca8b74967a WatchSource:0}: Error finding container 00d9f71d257154ebff8314dda1ebcea4a18ecb2580ee1492196bacca8b74967a: Status 404 returned error can't find the container with id 
00d9f71d257154ebff8314dda1ebcea4a18ecb2580ee1492196bacca8b74967a Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.113480 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="408c166a-3c09-4993-bd81-59bf46798fca" path="/var/lib/kubelet/pods/408c166a-3c09-4993-bd81-59bf46798fca/volumes" Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.114823 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879828de-d2a0-475c-b2c3-9f07caa1be65" path="/var/lib/kubelet/pods/879828de-d2a0-475c-b2c3-9f07caa1be65/volumes" Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.398303 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" event={"ID":"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b","Type":"ContainerStarted","Data":"84f1f44cea0ba1bbe7a81e9f212f7ffa115592893287b9b94fdcb98a1c713a9e"} Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.398646 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" event={"ID":"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b","Type":"ContainerStarted","Data":"66b6dc2d8c65248f4dcd0cbaa077405a89867e0c19c46a679c75fa5e6d171812"} Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.398670 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.400790 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" event={"ID":"5ba37d8e-9139-402a-9909-8a9c3fa4d103","Type":"ContainerStarted","Data":"3c24f46e748734abcd75c20587fa57ae57554aacfa8149267e6e1e842dc3973a"} Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.400855 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" event={"ID":"5ba37d8e-9139-402a-9909-8a9c3fa4d103","Type":"ContainerStarted","Data":"00d9f71d257154ebff8314dda1ebcea4a18ecb2580ee1492196bacca8b74967a"} Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.401878 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.406582 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.409515 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.417996 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podStartSLOduration=4.417976863 podStartE2EDuration="4.417976863s" podCreationTimestamp="2026-03-12 08:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:03:41.4160387 +0000 UTC m=+294.998074453" watchObservedRunningTime="2026-03-12 08:03:41.417976863 +0000 UTC m=+295.000012596" Mar 12 08:03:41 crc kubenswrapper[4809]: I0312 08:03:41.444513 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podStartSLOduration=3.444489336 podStartE2EDuration="3.444489336s" podCreationTimestamp="2026-03-12 08:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:03:41.442324587 +0000 UTC m=+295.024360320" 
watchObservedRunningTime="2026-03-12 08:03:41.444489336 +0000 UTC m=+295.026525069" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.931786 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d9c768c99-69kh8"] Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.933253 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.942683 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.942871 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.943343 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.943619 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.944249 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.944477 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.944577 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.944966 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 08:03:42 crc 
kubenswrapper[4809]: I0312 08:03:42.945084 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.945265 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.985480 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.985811 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.997367 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 08:03:42 crc kubenswrapper[4809]: I0312 08:03:42.998312 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.019714 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d9c768c99-69kh8"] Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.023934 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.045820 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc 
kubenswrapper[4809]: I0312 08:03:43.045892 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.045931 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svq4\" (UniqueName: \"kubernetes.io/projected/0a8a753b-e49b-4631-8630-ecc01634d644-kube-api-access-8svq4\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.045965 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a8a753b-e49b-4631-8630-ecc01634d644-audit-dir\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.045996 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046022 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046134 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046230 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046398 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046510 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: 
\"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046570 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-audit-policies\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046613 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-session\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046656 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.046841 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: 
I0312 08:03:43.148568 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.149068 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.149244 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.149382 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.149496 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.149621 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-audit-policies\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.149724 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-session\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.151035 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.151193 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " 
pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.151334 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.151449 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.151568 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svq4\" (UniqueName: \"kubernetes.io/projected/0a8a753b-e49b-4631-8630-ecc01634d644-kube-api-access-8svq4\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.151679 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a8a753b-e49b-4631-8630-ecc01634d644-audit-dir\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.151794 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.151836 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.150865 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-audit-policies\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.150439 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.152508 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a8a753b-e49b-4631-8630-ecc01634d644-audit-dir\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.152556 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.156347 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.156436 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.156519 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.156748 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-login\") pod 
\"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.157356 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.157945 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.159965 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-system-session\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.165251 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a8a753b-e49b-4631-8630-ecc01634d644-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.176885 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8svq4\" (UniqueName: \"kubernetes.io/projected/0a8a753b-e49b-4631-8630-ecc01634d644-kube-api-access-8svq4\") pod \"oauth-openshift-7d9c768c99-69kh8\" (UID: \"0a8a753b-e49b-4631-8630-ecc01634d644\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.295221 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:43 crc kubenswrapper[4809]: I0312 08:03:43.761811 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d9c768c99-69kh8"] Mar 12 08:03:44 crc kubenswrapper[4809]: I0312 08:03:44.425425 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" event={"ID":"0a8a753b-e49b-4631-8630-ecc01634d644","Type":"ContainerStarted","Data":"2305c0779cfee119bb607b60df41d3a7b4dce5e75dcc42f8e3a6335260a27317"} Mar 12 08:03:44 crc kubenswrapper[4809]: I0312 08:03:44.425723 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" event={"ID":"0a8a753b-e49b-4631-8630-ecc01634d644","Type":"ContainerStarted","Data":"d1b22df5fbec8634a4be6888009bcdbb2e44fc5c0e2b144b959e3cba80c005b4"} Mar 12 08:03:44 crc kubenswrapper[4809]: I0312 08:03:44.427243 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:44 crc kubenswrapper[4809]: I0312 08:03:44.433951 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" Mar 12 08:03:44 crc kubenswrapper[4809]: I0312 08:03:44.448203 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" 
podStartSLOduration=32.448185115 podStartE2EDuration="32.448185115s" podCreationTimestamp="2026-03-12 08:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:03:44.444323709 +0000 UTC m=+298.026359442" watchObservedRunningTime="2026-03-12 08:03:44.448185115 +0000 UTC m=+298.030220848" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.012016 4809 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.012963 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033" gracePeriod=15 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.013073 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f" gracePeriod=15 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.012983 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564" gracePeriod=15 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.013059 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab" gracePeriod=15 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.013028 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14" gracePeriod=15 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.017337 4809 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.017804 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.017840 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.017859 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.017873 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.017892 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.017904 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.017927 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.017941 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.017967 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.017980 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.017999 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018011 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.018026 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018039 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.018057 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018072 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.018085 4809 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018098 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018388 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018411 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018425 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018443 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018461 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018478 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018500 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.018700 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018726 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.018923 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.019401 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.026330 4809 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.027535 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.034160 4809 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.085807 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.085980 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.086268 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.086348 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: 
I0312 08:03:45.086441 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.086488 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.086633 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.086737 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.090027 4809 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 
crc kubenswrapper[4809]: I0312 08:03:45.188282 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188498 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 
08:03:45.188548 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188646 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188677 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188750 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188747 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188827 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188802 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188784 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188877 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.188963 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.189105 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.391607 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:45 crc kubenswrapper[4809]: W0312 08:03:45.424325 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-9ea09c062b47b5528a172422916c5ae7049b2844ea031dd0be8ab52dfeab7739 WatchSource:0}: Error finding container 9ea09c062b47b5528a172422916c5ae7049b2844ea031dd0be8ab52dfeab7739: Status 404 returned error can't find the container with id 9ea09c062b47b5528a172422916c5ae7049b2844ea031dd0be8ab52dfeab7739 Mar 12 08:03:45 crc kubenswrapper[4809]: E0312 08:03:45.431496 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c0957ac0fd2da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 08:03:45.430655706 +0000 UTC m=+299.012691469,LastTimestamp:2026-03-12 
08:03:45.430655706 +0000 UTC m=+299.012691469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.440761 4809 generic.go:334] "Generic (PLEG): container finished" podID="5870849b-5199-42a7-b08d-614735e47737" containerID="dd466d440d4fbc4c6f4697487402dd28a7ca539addce8dd241aba7f548e49590" exitCode=0 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.440889 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5870849b-5199-42a7-b08d-614735e47737","Type":"ContainerDied","Data":"dd466d440d4fbc4c6f4697487402dd28a7ca539addce8dd241aba7f548e49590"} Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.441745 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.446844 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.449334 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.451029 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564" exitCode=0 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.451081 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f" exitCode=0 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.451101 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14" exitCode=0 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.451149 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab" exitCode=2 Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.451306 4809 scope.go:117] "RemoveContainer" containerID="94c69205f47db3c3a925250e0e899627b256d299d332a6f21f5cb8dbc84bfdf9" Mar 12 08:03:45 crc kubenswrapper[4809]: I0312 08:03:45.454553 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9ea09c062b47b5528a172422916c5ae7049b2844ea031dd0be8ab52dfeab7739"} Mar 12 08:03:46 crc kubenswrapper[4809]: I0312 08:03:46.466873 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 08:03:46 crc kubenswrapper[4809]: I0312 08:03:46.470555 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa"} Mar 12 08:03:46 crc kubenswrapper[4809]: E0312 08:03:46.472449 4809 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:46 crc kubenswrapper[4809]: I0312 08:03:46.472627 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.000775 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.002208 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.119592 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5870849b-5199-42a7-b08d-614735e47737-kube-api-access\") pod \"5870849b-5199-42a7-b08d-614735e47737\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.119678 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-var-lock\") pod \"5870849b-5199-42a7-b08d-614735e47737\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.119715 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-kubelet-dir\") pod 
\"5870849b-5199-42a7-b08d-614735e47737\" (UID: \"5870849b-5199-42a7-b08d-614735e47737\") " Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.120017 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5870849b-5199-42a7-b08d-614735e47737" (UID: "5870849b-5199-42a7-b08d-614735e47737"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.120819 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-var-lock" (OuterVolumeSpecName: "var-lock") pod "5870849b-5199-42a7-b08d-614735e47737" (UID: "5870849b-5199-42a7-b08d-614735e47737"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.163916 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5870849b-5199-42a7-b08d-614735e47737-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5870849b-5199-42a7-b08d-614735e47737" (UID: "5870849b-5199-42a7-b08d-614735e47737"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.169690 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.223274 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5870849b-5199-42a7-b08d-614735e47737-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.223318 4809 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-var-lock\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.223329 4809 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5870849b-5199-42a7-b08d-614735e47737-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.447343 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.448039 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.448732 4809 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.449246 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.481388 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.481377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5870849b-5199-42a7-b08d-614735e47737","Type":"ContainerDied","Data":"c81e0298906c3aeb00a790984f638e3fd44782f9b5267de7a53b3e0212685d54"} Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.481595 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81e0298906c3aeb00a790984f638e3fd44782f9b5267de7a53b3e0212685d54" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.484707 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.486059 4809 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033" exitCode=0 Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.486434 4809 scope.go:117] "RemoveContainer" containerID="b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.486514 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:03:47 crc kubenswrapper[4809]: E0312 08:03:47.489352 4809 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.493079 4809 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.494440 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.509776 4809 scope.go:117] "RemoveContainer" containerID="411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526182 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526271 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526196 4809 scope.go:117] "RemoveContainer" containerID="c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526368 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526428 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526572 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526664 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526932 4809 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526979 4809 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.526998 4809 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.547512 4809 scope.go:117] "RemoveContainer" containerID="efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.570259 4809 scope.go:117] "RemoveContainer" containerID="95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.594488 4809 scope.go:117] "RemoveContainer" containerID="9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.614616 4809 scope.go:117] "RemoveContainer" containerID="b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564" Mar 12 08:03:47 crc kubenswrapper[4809]: E0312 
08:03:47.615340 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\": container with ID starting with b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564 not found: ID does not exist" containerID="b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.615390 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564"} err="failed to get container status \"b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\": rpc error: code = NotFound desc = could not find container \"b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564\": container with ID starting with b9814d65f13344afd5484baf20498a643e575626a91a474b937602f4bb06f564 not found: ID does not exist" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.615424 4809 scope.go:117] "RemoveContainer" containerID="411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f" Mar 12 08:03:47 crc kubenswrapper[4809]: E0312 08:03:47.615711 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\": container with ID starting with 411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f not found: ID does not exist" containerID="411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.615731 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f"} err="failed to get container status \"411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\": rpc 
error: code = NotFound desc = could not find container \"411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f\": container with ID starting with 411ee556dd1461097ea682e93564110b2a46d02734cb91026ba06ffe7de7a80f not found: ID does not exist" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.615743 4809 scope.go:117] "RemoveContainer" containerID="c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14" Mar 12 08:03:47 crc kubenswrapper[4809]: E0312 08:03:47.616140 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\": container with ID starting with c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14 not found: ID does not exist" containerID="c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.616189 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14"} err="failed to get container status \"c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\": rpc error: code = NotFound desc = could not find container \"c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14\": container with ID starting with c386318c2476130eb02e6620ab75fce7d5cd814d7f214bed4f449de60d126a14 not found: ID does not exist" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.616225 4809 scope.go:117] "RemoveContainer" containerID="efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab" Mar 12 08:03:47 crc kubenswrapper[4809]: E0312 08:03:47.616572 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\": container with ID starting with 
efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab not found: ID does not exist" containerID="efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.616620 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab"} err="failed to get container status \"efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\": rpc error: code = NotFound desc = could not find container \"efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab\": container with ID starting with efb3f6bb09e8fcec0267b3fdbdf43301e33aa2ba8bfaf8b3a124c7e7714949ab not found: ID does not exist" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.616657 4809 scope.go:117] "RemoveContainer" containerID="95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033" Mar 12 08:03:47 crc kubenswrapper[4809]: E0312 08:03:47.617309 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\": container with ID starting with 95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033 not found: ID does not exist" containerID="95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.617341 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033"} err="failed to get container status \"95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\": rpc error: code = NotFound desc = could not find container \"95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033\": container with ID starting with 95634c408015e53a4c8d044b27803c5070ad086be1f3372a760744db1bc9c033 not found: ID does not 
exist" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.617358 4809 scope.go:117] "RemoveContainer" containerID="9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2" Mar 12 08:03:47 crc kubenswrapper[4809]: E0312 08:03:47.617744 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\": container with ID starting with 9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2 not found: ID does not exist" containerID="9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.617782 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2"} err="failed to get container status \"9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\": rpc error: code = NotFound desc = could not find container \"9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2\": container with ID starting with 9f4e932ddfa6a6d089daba38ca760a032af21e88df938f64ed4ae2f7782103b2 not found: ID does not exist" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.808039 4809 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:47 crc kubenswrapper[4809]: I0312 08:03:47.808569 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: 
connection refused" Mar 12 08:03:49 crc kubenswrapper[4809]: I0312 08:03:49.116265 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 12 08:03:49 crc kubenswrapper[4809]: E0312 08:03:49.429736 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod5870849b_5199_42a7_b08d_614735e47737.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:03:51 crc kubenswrapper[4809]: E0312 08:03:51.391748 4809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c0957ac0fd2da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 08:03:45.430655706 +0000 UTC m=+299.012691469,LastTimestamp:2026-03-12 08:03:45.430655706 +0000 UTC m=+299.012691469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 08:03:51 crc kubenswrapper[4809]: E0312 08:03:51.506399 4809 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:51 crc kubenswrapper[4809]: E0312 08:03:51.506897 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:51 crc kubenswrapper[4809]: E0312 08:03:51.508352 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:51 crc kubenswrapper[4809]: E0312 08:03:51.508755 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:51 crc kubenswrapper[4809]: E0312 08:03:51.509090 4809 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:51 crc kubenswrapper[4809]: I0312 08:03:51.509168 4809 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 08:03:51 crc kubenswrapper[4809]: E0312 08:03:51.509463 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Mar 12 08:03:51 crc kubenswrapper[4809]: E0312 08:03:51.710625 4809 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Mar 12 08:03:52 crc kubenswrapper[4809]: E0312 08:03:52.111304 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Mar 12 08:03:52 crc kubenswrapper[4809]: E0312 08:03:52.912720 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Mar 12 08:03:53 crc kubenswrapper[4809]: E0312 08:03:53.148962 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:03:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:03:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:03:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T08:03:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:53 crc kubenswrapper[4809]: E0312 08:03:53.149726 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:53 crc kubenswrapper[4809]: E0312 08:03:53.150530 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:53 crc kubenswrapper[4809]: E0312 08:03:53.151055 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:53 crc kubenswrapper[4809]: E0312 08:03:53.151601 4809 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:53 crc kubenswrapper[4809]: E0312 08:03:53.151644 4809 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 08:03:54 crc kubenswrapper[4809]: E0312 08:03:54.513381 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Mar 12 08:03:57 crc kubenswrapper[4809]: I0312 08:03:57.109016 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:57 crc kubenswrapper[4809]: I0312 08:03:57.559848 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 08:03:57 crc kubenswrapper[4809]: I0312 08:03:57.561813 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 08:03:57 crc kubenswrapper[4809]: I0312 08:03:57.562048 4809 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492" exitCode=1 Mar 12 08:03:57 crc kubenswrapper[4809]: I0312 08:03:57.562156 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492"} Mar 12 08:03:57 crc kubenswrapper[4809]: I0312 08:03:57.563311 4809 scope.go:117] "RemoveContainer" containerID="e5d018b14132bda1464a87d5f4073560c28ae90351c58099d35605cfa0a18492" Mar 12 08:03:57 crc kubenswrapper[4809]: I0312 08:03:57.563654 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:57 crc kubenswrapper[4809]: I0312 08:03:57.564266 4809 status_manager.go:851] 
"Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:57 crc kubenswrapper[4809]: E0312 08:03:57.714708 4809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="6.4s" Mar 12 08:03:58 crc kubenswrapper[4809]: I0312 08:03:58.576347 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 08:03:58 crc kubenswrapper[4809]: I0312 08:03:58.577246 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 08:03:58 crc kubenswrapper[4809]: I0312 08:03:58.577377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ac0b7040102615d0877faa8b6ef805083ce6ab86e4ccb94c8c19d56c4b4c50c"} Mar 12 08:03:58 crc kubenswrapper[4809]: I0312 08:03:58.579088 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:58 crc kubenswrapper[4809]: I0312 08:03:58.580277 4809 status_manager.go:851] "Failed to get status 
for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:03:59 crc kubenswrapper[4809]: E0312 08:03:59.573641 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod5870849b_5199_42a7_b08d_614735e47737.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.105386 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.107107 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.107867 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.128698 4809 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09f5e61a-c077-449f-8291-dbf93ac9aca3" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.128734 4809 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09f5e61a-c077-449f-8291-dbf93ac9aca3" Mar 12 08:04:00 crc 
kubenswrapper[4809]: E0312 08:04:00.129774 4809 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.130620 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:00 crc kubenswrapper[4809]: W0312 08:04:00.157317 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d964356e8c1a373daab5dbd6d01186e7f3ccfdc5eaecb2c464c277f7adb1f8fb WatchSource:0}: Error finding container d964356e8c1a373daab5dbd6d01186e7f3ccfdc5eaecb2c464c277f7adb1f8fb: Status 404 returned error can't find the container with id d964356e8c1a373daab5dbd6d01186e7f3ccfdc5eaecb2c464c277f7adb1f8fb Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.626060 4809 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="011399e5cfb2735c3f9cfe36a30974700ff79f9772120c46b0893eaf8d2d3a9e" exitCode=0 Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.626165 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"011399e5cfb2735c3f9cfe36a30974700ff79f9772120c46b0893eaf8d2d3a9e"} Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.626807 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d964356e8c1a373daab5dbd6d01186e7f3ccfdc5eaecb2c464c277f7adb1f8fb"} Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.627476 4809 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09f5e61a-c077-449f-8291-dbf93ac9aca3" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.627530 4809 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09f5e61a-c077-449f-8291-dbf93ac9aca3" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.627805 4809 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:04:00 crc kubenswrapper[4809]: I0312 08:04:00.628416 4809 status_manager.go:851] "Failed to get status for pod" podUID="5870849b-5199-42a7-b08d-614735e47737" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Mar 12 08:04:00 crc kubenswrapper[4809]: E0312 08:04:00.628511 4809 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:01 crc kubenswrapper[4809]: I0312 08:04:01.636494 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f942d4b262acb45ce12cfe89d917de487f050eba1dee50888627f8d82d96d550"} Mar 12 08:04:01 crc kubenswrapper[4809]: I0312 08:04:01.636845 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb5453bfe624edb9e35d2a82d9f8e1b3b3a5aca15b6e5e38a00b792dd47f6cd0"} Mar 12 08:04:01 crc kubenswrapper[4809]: I0312 08:04:01.636867 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a000e62268a2f14f982992966d13452a2b2f6260da81361086970247628684b7"} Mar 12 08:04:01 crc kubenswrapper[4809]: I0312 08:04:01.860218 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 08:04:02 crc kubenswrapper[4809]: I0312 08:04:02.645987 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ceccf342cc6b5e8fcf74826cf8ef2946ffee0e9ba258ac739afc7deffa56c122"} Mar 12 08:04:02 crc kubenswrapper[4809]: I0312 08:04:02.646033 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"04670a4ccc3e5e9d440e6f78e3355da090d972e447d97e8fb400ab1b344f2106"} Mar 12 08:04:02 crc kubenswrapper[4809]: I0312 08:04:02.646666 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:02 crc kubenswrapper[4809]: I0312 08:04:02.646776 4809 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09f5e61a-c077-449f-8291-dbf93ac9aca3" Mar 12 08:04:02 crc kubenswrapper[4809]: I0312 08:04:02.646802 4809 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09f5e61a-c077-449f-8291-dbf93ac9aca3" Mar 12 08:04:03 crc kubenswrapper[4809]: I0312 08:04:03.484679 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 08:04:03 crc kubenswrapper[4809]: I0312 08:04:03.490808 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 08:04:05 crc kubenswrapper[4809]: I0312 08:04:05.131143 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:05 crc kubenswrapper[4809]: I0312 08:04:05.131185 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:05 crc kubenswrapper[4809]: I0312 08:04:05.136233 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:07 crc kubenswrapper[4809]: I0312 08:04:07.664939 4809 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:07 crc kubenswrapper[4809]: I0312 08:04:07.802898 4809 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e89d92d1-dfa5-4add-b7a8-c52d2d9b01b3" Mar 12 08:04:08 crc kubenswrapper[4809]: I0312 08:04:08.683696 4809 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09f5e61a-c077-449f-8291-dbf93ac9aca3" Mar 12 08:04:08 crc kubenswrapper[4809]: I0312 08:04:08.684318 4809 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09f5e61a-c077-449f-8291-dbf93ac9aca3" Mar 12 08:04:08 crc kubenswrapper[4809]: I0312 08:04:08.687340 4809 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="e89d92d1-dfa5-4add-b7a8-c52d2d9b01b3" Mar 12 08:04:09 crc kubenswrapper[4809]: E0312 08:04:09.694012 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod5870849b_5199_42a7_b08d_614735e47737.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:04:11 crc kubenswrapper[4809]: I0312 08:04:11.868754 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 08:04:18 crc kubenswrapper[4809]: I0312 08:04:18.033725 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 08:04:18 crc kubenswrapper[4809]: I0312 08:04:18.854510 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.147015 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.184290 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.349029 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.387575 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.388929 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.494287 4809 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.496929 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.689258 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 08:04:19 crc kubenswrapper[4809]: E0312 08:04:19.851762 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod5870849b_5199_42a7_b08d_614735e47737.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:04:19 crc kubenswrapper[4809]: I0312 08:04:19.943456 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.313063 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.327189 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.347335 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.358341 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.423043 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.482489 4809 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.520028 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.637484 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.647462 4809 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.671235 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.671418 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.703414 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 08:04:20 crc kubenswrapper[4809]: I0312 08:04:20.821052 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.035149 4809 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.070284 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.180167 4809 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.484205 4809 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.485732 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.486356 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.612053 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.632595 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.675585 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.790474 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.809920 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.843689 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.857614 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.883713 4809 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.888530 4809 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.888587 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.890675 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.893491 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.893925 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.910402 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.910380524 podStartE2EDuration="14.910380524s" podCreationTimestamp="2026-03-12 08:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:04:21.907499489 +0000 UTC m=+335.489535222" watchObservedRunningTime="2026-03-12 08:04:21.910380524 +0000 UTC m=+335.492416247" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.942174 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.956772 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 08:04:21.962346 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 08:04:21 crc kubenswrapper[4809]: I0312 
08:04:21.995086 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.019377 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.028039 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.070270 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.086394 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.087574 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.119348 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.120272 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.174689 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.287976 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.324571 4809 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.493000 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.502845 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.569773 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.654383 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.698160 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.704143 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.731144 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.773587 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.912750 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 08:04:22 crc kubenswrapper[4809]: I0312 08:04:22.954930 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 12 08:04:23 crc 
kubenswrapper[4809]: I0312 08:04:23.002989 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.012087 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.022223 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.066698 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.135915 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.178488 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.208360 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.332558 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.355044 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.380902 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.397954 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.446225 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.533048 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.837579 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.854507 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Mar 12 08:04:23 crc kubenswrapper[4809]: I0312 08:04:23.893331 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.099814 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.215921 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.379637 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.418462 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.612838 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.643226 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.649075 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.712209 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.712510 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.714180 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.715447 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.740533 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 12 08:04:24 crc kubenswrapper[4809]: I0312 08:04:24.994102 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.022422 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.042674 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.077333 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555044-7j2cn"]
Mar 12 08:04:25 crc kubenswrapper[4809]: E0312 08:04:25.077687 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5870849b-5199-42a7-b08d-614735e47737" containerName="installer"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.077707 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5870849b-5199-42a7-b08d-614735e47737" containerName="installer"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.077844 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5870849b-5199-42a7-b08d-614735e47737" containerName="installer"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.078403 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555044-7j2cn"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.080004 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.080483 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.085562 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555044-7j2cn"]
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.096504 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.133025 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.145404 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7skwk\" (UniqueName: \"kubernetes.io/projected/9274685c-c53e-4796-bb96-e1f50db591ed-kube-api-access-7skwk\") pod \"auto-csr-approver-29555044-7j2cn\" (UID: \"9274685c-c53e-4796-bb96-e1f50db591ed\") " pod="openshift-infra/auto-csr-approver-29555044-7j2cn"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.158157 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.188867 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.232158 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.242549 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.246460 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7skwk\" (UniqueName: \"kubernetes.io/projected/9274685c-c53e-4796-bb96-e1f50db591ed-kube-api-access-7skwk\") pod \"auto-csr-approver-29555044-7j2cn\" (UID: \"9274685c-c53e-4796-bb96-e1f50db591ed\") " pod="openshift-infra/auto-csr-approver-29555044-7j2cn"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.265838 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7skwk\" (UniqueName: \"kubernetes.io/projected/9274685c-c53e-4796-bb96-e1f50db591ed-kube-api-access-7skwk\") pod \"auto-csr-approver-29555044-7j2cn\" (UID: \"9274685c-c53e-4796-bb96-e1f50db591ed\") " pod="openshift-infra/auto-csr-approver-29555044-7j2cn"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.266403 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.394835 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555044-7j2cn"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.406532 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.448668 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.497914 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.686974 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.784988 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.842124 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555044-7j2cn"]
Mar 12 08:04:25 crc kubenswrapper[4809]: W0312 08:04:25.845070 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9274685c_c53e_4796_bb96_e1f50db591ed.slice/crio-4da8bff55bb221a193cc80d76a656ef4823a33f6f7ad43c3d9abcc15a6f4ba31 WatchSource:0}: Error finding container 4da8bff55bb221a193cc80d76a656ef4823a33f6f7ad43c3d9abcc15a6f4ba31: Status 404 returned error can't find the container with id 4da8bff55bb221a193cc80d76a656ef4823a33f6f7ad43c3d9abcc15a6f4ba31
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.902610 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.927649 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 12 08:04:25 crc kubenswrapper[4809]: I0312 08:04:25.934938 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.007661 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.148186 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.173098 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.189258 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.224089 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.265910 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.284735 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.332176 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.343702 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.453268 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.489953 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.524180 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.555301 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.594290 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.619968 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.693166 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.733869 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.761770 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.768441 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.826514 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555044-7j2cn" event={"ID":"9274685c-c53e-4796-bb96-e1f50db591ed","Type":"ContainerStarted","Data":"4da8bff55bb221a193cc80d76a656ef4823a33f6f7ad43c3d9abcc15a6f4ba31"}
Mar 12 08:04:26 crc kubenswrapper[4809]: I0312 08:04:26.957036 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.067469 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.179696 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.193176 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.203247 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.218912 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.275562 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.419204 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.450204 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.596437 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.638149 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.704976 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.718896 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.735519 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.800430 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.811996 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.835223 4809 generic.go:334] "Generic (PLEG): container finished" podID="9274685c-c53e-4796-bb96-e1f50db591ed" containerID="8c47932dc5b93742eb8a05948d3b248a5b749a68e6094eed23ee2c185d8f7cd8" exitCode=0
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.835286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555044-7j2cn" event={"ID":"9274685c-c53e-4796-bb96-e1f50db591ed","Type":"ContainerDied","Data":"8c47932dc5b93742eb8a05948d3b248a5b749a68e6094eed23ee2c185d8f7cd8"}
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.908292 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.908654 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.948344 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.958745 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.963767 4809 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 12 08:04:27 crc kubenswrapper[4809]: I0312 08:04:27.972105 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.031778 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.064432 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.089057 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.126687 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.139857 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.194542 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.194635 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.210451 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.235232 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.254252 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.349600 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.376582 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.417501 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.423929 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.428466 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.483471 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.509536 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.582770 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.663503 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.675733 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.692326 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.748994 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.822945 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.862235 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.865677 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.913074 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.956012 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 12 08:04:28 crc kubenswrapper[4809]: I0312 08:04:28.978869 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.036163 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.054922 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.077739 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.081218 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.114597 4809 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.114914 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa" gracePeriod=5
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.134295 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.147446 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.205039 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555044-7j2cn"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.214161 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.302766 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7skwk\" (UniqueName: \"kubernetes.io/projected/9274685c-c53e-4796-bb96-e1f50db591ed-kube-api-access-7skwk\") pod \"9274685c-c53e-4796-bb96-e1f50db591ed\" (UID: \"9274685c-c53e-4796-bb96-e1f50db591ed\") "
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.305382 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.309627 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.326563 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9274685c-c53e-4796-bb96-e1f50db591ed-kube-api-access-7skwk" (OuterVolumeSpecName: "kube-api-access-7skwk") pod "9274685c-c53e-4796-bb96-e1f50db591ed" (UID: "9274685c-c53e-4796-bb96-e1f50db591ed"). InnerVolumeSpecName "kube-api-access-7skwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.403800 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7skwk\" (UniqueName: \"kubernetes.io/projected/9274685c-c53e-4796-bb96-e1f50db591ed-kube-api-access-7skwk\") on node \"crc\" DevicePath \"\""
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.429451 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.458008 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.459685 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.489231 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.590081 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.699998 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.711614 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.729282 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.751234 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.825035 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.847698 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555044-7j2cn" event={"ID":"9274685c-c53e-4796-bb96-e1f50db591ed","Type":"ContainerDied","Data":"4da8bff55bb221a193cc80d76a656ef4823a33f6f7ad43c3d9abcc15a6f4ba31"}
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.847737 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4da8bff55bb221a193cc80d76a656ef4823a33f6f7ad43c3d9abcc15a6f4ba31"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.847786 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555044-7j2cn"
Mar 12 08:04:29 crc kubenswrapper[4809]: I0312 08:04:29.969327 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 12 08:04:29 crc kubenswrapper[4809]: E0312 08:04:29.972896 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod5870849b_5199_42a7_b08d_614735e47737.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.002037 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.025415 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.096786 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.259085 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.297328 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.304202 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.310269 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.395945 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.417575 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.545292 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.688607 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.792514 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.880567 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.958768 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 12 08:04:30 crc kubenswrapper[4809]: I0312 08:04:30.965385 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.053811 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.080632 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.126242 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.160380 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.396291 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.459216 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.499716 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.671590 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.677592 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.711289 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.787431 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 12 08:04:31 crc kubenswrapper[4809]: I0312 08:04:31.810494 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.062249 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.127488 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.222826 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.230023 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.246514 4809 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.302544 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.382316 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.406574 4809 reflector.go:368] Caches populated
for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.441802 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 08:04:32 crc kubenswrapper[4809]: I0312 08:04:32.542101 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 08:04:33 crc kubenswrapper[4809]: I0312 08:04:33.020830 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 12 08:04:33 crc kubenswrapper[4809]: I0312 08:04:33.138582 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 08:04:33 crc kubenswrapper[4809]: I0312 08:04:33.246330 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 08:04:33 crc kubenswrapper[4809]: I0312 08:04:33.312277 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 08:04:33 crc kubenswrapper[4809]: I0312 08:04:33.653088 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 12 08:04:33 crc kubenswrapper[4809]: I0312 08:04:33.655611 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.715397 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.716906 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.813849 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.813957 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.814047 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.814194 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.814234 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.814674 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.814742 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.814941 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.815172 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.828697 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.913775 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.913877 4809 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa" exitCode=137 Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.914029 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.913962 4809 scope.go:117] "RemoveContainer" containerID="8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.916726 4809 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.916824 4809 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.916900 4809 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.916978 4809 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 08:04:34 crc 
kubenswrapper[4809]: I0312 08:04:34.917012 4809 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.940523 4809 scope.go:117] "RemoveContainer" containerID="8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa" Mar 12 08:04:34 crc kubenswrapper[4809]: E0312 08:04:34.943543 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa\": container with ID starting with 8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa not found: ID does not exist" containerID="8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa" Mar 12 08:04:34 crc kubenswrapper[4809]: I0312 08:04:34.943598 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa"} err="failed to get container status \"8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa\": rpc error: code = NotFound desc = could not find container \"8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa\": container with ID starting with 8bc2f834965b5e39b43bee7f1a0a4592eee5d2ca252c23a2fefec7e61cac38aa not found: ID does not exist" Mar 12 08:04:35 crc kubenswrapper[4809]: I0312 08:04:35.122759 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 12 08:04:40 crc kubenswrapper[4809]: E0312 08:04:40.071048 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod5870849b_5199_42a7_b08d_614735e47737.slice\": RecentStats: unable to find data in 
memory cache]" Mar 12 08:04:47 crc kubenswrapper[4809]: I0312 08:04:47.404378 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 08:04:48 crc kubenswrapper[4809]: I0312 08:04:48.099536 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 08:04:48 crc kubenswrapper[4809]: I0312 08:04:48.874333 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 08:04:50 crc kubenswrapper[4809]: I0312 08:04:50.178680 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 08:04:51 crc kubenswrapper[4809]: I0312 08:04:51.023817 4809 generic.go:334] "Generic (PLEG): container finished" podID="512b5035-d9af-4615-b351-2199e94f9c50" containerID="dcf5ef7dc6b85329eeaf508ced67438c6450c1948cf660623dee159685fc882c" exitCode=0 Mar 12 08:04:51 crc kubenswrapper[4809]: I0312 08:04:51.023907 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" event={"ID":"512b5035-d9af-4615-b351-2199e94f9c50","Type":"ContainerDied","Data":"dcf5ef7dc6b85329eeaf508ced67438c6450c1948cf660623dee159685fc882c"} Mar 12 08:04:51 crc kubenswrapper[4809]: I0312 08:04:51.024776 4809 scope.go:117] "RemoveContainer" containerID="dcf5ef7dc6b85329eeaf508ced67438c6450c1948cf660623dee159685fc882c" Mar 12 08:04:52 crc kubenswrapper[4809]: I0312 08:04:52.038684 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" event={"ID":"512b5035-d9af-4615-b351-2199e94f9c50","Type":"ContainerStarted","Data":"1066f01edb8b85480901db466b5358fefe8b997fb408f1d63f431012ca7746d4"} Mar 12 08:04:52 crc kubenswrapper[4809]: I0312 08:04:52.040321 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:04:52 crc kubenswrapper[4809]: I0312 08:04:52.041987 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:04:52 crc kubenswrapper[4809]: I0312 08:04:52.434685 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 08:04:54 crc kubenswrapper[4809]: I0312 08:04:54.537034 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 08:04:55 crc kubenswrapper[4809]: I0312 08:04:55.236608 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 12 08:04:56 crc kubenswrapper[4809]: I0312 08:04:56.951940 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 08:04:57 crc kubenswrapper[4809]: I0312 08:04:57.177416 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 08:05:02 crc kubenswrapper[4809]: I0312 08:05:02.142058 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 08:05:03 crc kubenswrapper[4809]: I0312 08:05:03.140623 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 08:05:10 crc kubenswrapper[4809]: I0312 08:05:10.419786 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 12 08:05:15 crc kubenswrapper[4809]: I0312 08:05:15.049240 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:05:15 crc kubenswrapper[4809]: I0312 08:05:15.050485 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:05:45 crc kubenswrapper[4809]: I0312 08:05:45.049251 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:05:45 crc kubenswrapper[4809]: I0312 08:05:45.050426 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.147598 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555046-2w55h"] Mar 12 08:06:00 crc kubenswrapper[4809]: E0312 08:06:00.148580 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9274685c-c53e-4796-bb96-e1f50db591ed" containerName="oc" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.148596 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9274685c-c53e-4796-bb96-e1f50db591ed" containerName="oc" Mar 12 08:06:00 crc kubenswrapper[4809]: E0312 08:06:00.148606 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.148614 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.148751 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.148763 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9274685c-c53e-4796-bb96-e1f50db591ed" containerName="oc" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.149295 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555046-2w55h" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.153074 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.153924 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.155165 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555046-2w55h"] Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.156749 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.316463 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbvf\" (UniqueName: \"kubernetes.io/projected/10b2cf88-bbdd-48d9-8401-5bbd10f925ed-kube-api-access-brbvf\") pod \"auto-csr-approver-29555046-2w55h\" (UID: \"10b2cf88-bbdd-48d9-8401-5bbd10f925ed\") " pod="openshift-infra/auto-csr-approver-29555046-2w55h" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 
08:06:00.418057 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbvf\" (UniqueName: \"kubernetes.io/projected/10b2cf88-bbdd-48d9-8401-5bbd10f925ed-kube-api-access-brbvf\") pod \"auto-csr-approver-29555046-2w55h\" (UID: \"10b2cf88-bbdd-48d9-8401-5bbd10f925ed\") " pod="openshift-infra/auto-csr-approver-29555046-2w55h" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.448869 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbvf\" (UniqueName: \"kubernetes.io/projected/10b2cf88-bbdd-48d9-8401-5bbd10f925ed-kube-api-access-brbvf\") pod \"auto-csr-approver-29555046-2w55h\" (UID: \"10b2cf88-bbdd-48d9-8401-5bbd10f925ed\") " pod="openshift-infra/auto-csr-approver-29555046-2w55h" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.469612 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555046-2w55h" Mar 12 08:06:00 crc kubenswrapper[4809]: I0312 08:06:00.965484 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555046-2w55h"] Mar 12 08:06:01 crc kubenswrapper[4809]: I0312 08:06:01.483513 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555046-2w55h" event={"ID":"10b2cf88-bbdd-48d9-8401-5bbd10f925ed","Type":"ContainerStarted","Data":"6aba064d20587da41dcedea16ec188ba77823bf85c728da570c7a9863161866b"} Mar 12 08:06:02 crc kubenswrapper[4809]: I0312 08:06:02.489926 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555046-2w55h" event={"ID":"10b2cf88-bbdd-48d9-8401-5bbd10f925ed","Type":"ContainerStarted","Data":"db18b0ca54d6405a8c8d7af663fc2d374cb6ee863ee7f0a7b8b1203f3c5fadb6"} Mar 12 08:06:02 crc kubenswrapper[4809]: I0312 08:06:02.506149 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555046-2w55h" 
podStartSLOduration=1.23806214 podStartE2EDuration="2.506132093s" podCreationTimestamp="2026-03-12 08:06:00 +0000 UTC" firstStartedPulling="2026-03-12 08:06:00.976294555 +0000 UTC m=+434.558330288" lastFinishedPulling="2026-03-12 08:06:02.244364508 +0000 UTC m=+435.826400241" observedRunningTime="2026-03-12 08:06:02.50419846 +0000 UTC m=+436.086234203" watchObservedRunningTime="2026-03-12 08:06:02.506132093 +0000 UTC m=+436.088167826" Mar 12 08:06:03 crc kubenswrapper[4809]: I0312 08:06:03.496230 4809 generic.go:334] "Generic (PLEG): container finished" podID="10b2cf88-bbdd-48d9-8401-5bbd10f925ed" containerID="db18b0ca54d6405a8c8d7af663fc2d374cb6ee863ee7f0a7b8b1203f3c5fadb6" exitCode=0 Mar 12 08:06:03 crc kubenswrapper[4809]: I0312 08:06:03.496274 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555046-2w55h" event={"ID":"10b2cf88-bbdd-48d9-8401-5bbd10f925ed","Type":"ContainerDied","Data":"db18b0ca54d6405a8c8d7af663fc2d374cb6ee863ee7f0a7b8b1203f3c5fadb6"} Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.747223 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555046-2w55h" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.767152 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pfh6p"] Mar 12 08:06:04 crc kubenswrapper[4809]: E0312 08:06:04.767687 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b2cf88-bbdd-48d9-8401-5bbd10f925ed" containerName="oc" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.767808 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b2cf88-bbdd-48d9-8401-5bbd10f925ed" containerName="oc" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.768048 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b2cf88-bbdd-48d9-8401-5bbd10f925ed" containerName="oc" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.769087 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.789514 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pfh6p"] Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883331 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbvf\" (UniqueName: \"kubernetes.io/projected/10b2cf88-bbdd-48d9-8401-5bbd10f925ed-kube-api-access-brbvf\") pod \"10b2cf88-bbdd-48d9-8401-5bbd10f925ed\" (UID: \"10b2cf88-bbdd-48d9-8401-5bbd10f925ed\") " Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883534 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-bound-sa-token\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883557 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzvcc\" (UniqueName: \"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-kube-api-access-qzvcc\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883574 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/320a79e9-2b91-4ff1-94f3-df6a8f7489be-registry-certificates\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883596 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/320a79e9-2b91-4ff1-94f3-df6a8f7489be-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883627 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883655 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/320a79e9-2b91-4ff1-94f3-df6a8f7489be-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883676 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-registry-tls\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.883749 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/320a79e9-2b91-4ff1-94f3-df6a8f7489be-trusted-ca\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.888058 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b2cf88-bbdd-48d9-8401-5bbd10f925ed-kube-api-access-brbvf" (OuterVolumeSpecName: "kube-api-access-brbvf") pod "10b2cf88-bbdd-48d9-8401-5bbd10f925ed" (UID: "10b2cf88-bbdd-48d9-8401-5bbd10f925ed"). InnerVolumeSpecName "kube-api-access-brbvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.904822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.984544 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/320a79e9-2b91-4ff1-94f3-df6a8f7489be-trusted-ca\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.984645 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-bound-sa-token\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.984685 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzvcc\" (UniqueName: \"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-kube-api-access-qzvcc\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.984722 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/320a79e9-2b91-4ff1-94f3-df6a8f7489be-registry-certificates\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.984762 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/320a79e9-2b91-4ff1-94f3-df6a8f7489be-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.985785 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/320a79e9-2b91-4ff1-94f3-df6a8f7489be-trusted-ca\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.986214 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/320a79e9-2b91-4ff1-94f3-df6a8f7489be-registry-certificates\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.986271 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/320a79e9-2b91-4ff1-94f3-df6a8f7489be-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.986962 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/320a79e9-2b91-4ff1-94f3-df6a8f7489be-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.987070 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-registry-tls\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.987232 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbvf\" (UniqueName: \"kubernetes.io/projected/10b2cf88-bbdd-48d9-8401-5bbd10f925ed-kube-api-access-brbvf\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.990988 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-registry-tls\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:04 crc kubenswrapper[4809]: I0312 08:06:04.991822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/320a79e9-2b91-4ff1-94f3-df6a8f7489be-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.000157 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-bound-sa-token\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.004360 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzvcc\" (UniqueName: \"kubernetes.io/projected/320a79e9-2b91-4ff1-94f3-df6a8f7489be-kube-api-access-qzvcc\") pod \"image-registry-66df7c8f76-pfh6p\" (UID: \"320a79e9-2b91-4ff1-94f3-df6a8f7489be\") " pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.094476 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.345966 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pfh6p"] Mar 12 08:06:05 crc kubenswrapper[4809]: W0312 08:06:05.353447 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320a79e9_2b91_4ff1_94f3_df6a8f7489be.slice/crio-e40e82532d49348aab516db9e219d9093c606d6e2476f4f5c1fe93b33b22a093 WatchSource:0}: Error finding container e40e82532d49348aab516db9e219d9093c606d6e2476f4f5c1fe93b33b22a093: Status 404 returned error can't find the container with id e40e82532d49348aab516db9e219d9093c606d6e2476f4f5c1fe93b33b22a093 Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.512791 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555046-2w55h" Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.512975 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555046-2w55h" event={"ID":"10b2cf88-bbdd-48d9-8401-5bbd10f925ed","Type":"ContainerDied","Data":"6aba064d20587da41dcedea16ec188ba77823bf85c728da570c7a9863161866b"} Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.513266 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aba064d20587da41dcedea16ec188ba77823bf85c728da570c7a9863161866b" Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.515832 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" event={"ID":"320a79e9-2b91-4ff1-94f3-df6a8f7489be","Type":"ContainerStarted","Data":"10fc80b9521fd51c95a120d43e263eb809eb4c53db087c1a761c7c07f8c00113"} Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.515864 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" event={"ID":"320a79e9-2b91-4ff1-94f3-df6a8f7489be","Type":"ContainerStarted","Data":"e40e82532d49348aab516db9e219d9093c606d6e2476f4f5c1fe93b33b22a093"} Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.516242 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:05 crc kubenswrapper[4809]: I0312 08:06:05.538245 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" podStartSLOduration=1.538228203 podStartE2EDuration="1.538228203s" podCreationTimestamp="2026-03-12 08:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:06:05.535100406 +0000 UTC m=+439.117136149" 
watchObservedRunningTime="2026-03-12 08:06:05.538228203 +0000 UTC m=+439.120263936" Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.049271 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.050036 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.050134 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.051289 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e9e4f9c5d2c28b5cf22cec3c1066c042f4246b7416ca727e9771e6b84eecf1f"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.051413 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://4e9e4f9c5d2c28b5cf22cec3c1066c042f4246b7416ca727e9771e6b84eecf1f" gracePeriod=600 Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.587928 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="4e9e4f9c5d2c28b5cf22cec3c1066c042f4246b7416ca727e9771e6b84eecf1f" exitCode=0 Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.588009 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"4e9e4f9c5d2c28b5cf22cec3c1066c042f4246b7416ca727e9771e6b84eecf1f"} Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.589074 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"97cdf03cc29e20d0fb57a7ec80495b91279dda92db249385bd14589bccc4f68c"} Mar 12 08:06:15 crc kubenswrapper[4809]: I0312 08:06:15.589152 4809 scope.go:117] "RemoveContainer" containerID="5a1dde7e156e97091c5fa419562607743e842e7e610e685fff1be3ce7ce33f39" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.333477 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhrcn"] Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.335612 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vhrcn" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerName="registry-server" containerID="cri-o://6ac9290b8842acf4726ca719fcf6a69514ba4d833e22b09fb1312276e3df9ce0" gracePeriod=30 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.347280 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pqsb2"] Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.347556 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pqsb2" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerName="registry-server" 
containerID="cri-o://be53b596bf62aac81e069baa64b49314df9e66ce3279fc49765f560dbae7810f" gracePeriod=30 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.361646 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrnnz"] Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.362082 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" containerID="cri-o://1066f01edb8b85480901db466b5358fefe8b997fb408f1d63f431012ca7746d4" gracePeriod=30 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.371437 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jn4tc"] Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.371866 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jn4tc" podUID="c060e910-de6e-43df-a148-66f07bc71180" containerName="registry-server" containerID="cri-o://7e9e966724b0fb49a8489a41cdda03032b8797814018c4b1ec1ef27116c7ce3a" gracePeriod=30 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.379996 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lncvk"] Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.380948 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.389908 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qrn8"] Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.390282 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4qrn8" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="registry-server" containerID="cri-o://4103d51c22bb98f593d103b0a3a160b75f3d4654273da212f6cf13fb92126a29" gracePeriod=30 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.394757 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lncvk"] Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.508773 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh6f\" (UniqueName: \"kubernetes.io/projected/49c3f940-85d8-49c5-a529-367c56018858-kube-api-access-vmh6f\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.509228 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49c3f940-85d8-49c5-a529-367c56018858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.509263 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/49c3f940-85d8-49c5-a529-367c56018858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.610756 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh6f\" (UniqueName: \"kubernetes.io/projected/49c3f940-85d8-49c5-a529-367c56018858-kube-api-access-vmh6f\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.610840 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49c3f940-85d8-49c5-a529-367c56018858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.610882 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49c3f940-85d8-49c5-a529-367c56018858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.612564 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49c3f940-85d8-49c5-a529-367c56018858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 
crc kubenswrapper[4809]: I0312 08:06:19.618530 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49c3f940-85d8-49c5-a529-367c56018858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.620164 4809 generic.go:334] "Generic (PLEG): container finished" podID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerID="be53b596bf62aac81e069baa64b49314df9e66ce3279fc49765f560dbae7810f" exitCode=0 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.620239 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqsb2" event={"ID":"bd4be45a-8370-4cbe-a718-bb31fd64d99a","Type":"ContainerDied","Data":"be53b596bf62aac81e069baa64b49314df9e66ce3279fc49765f560dbae7810f"} Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.622612 4809 generic.go:334] "Generic (PLEG): container finished" podID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerID="4103d51c22bb98f593d103b0a3a160b75f3d4654273da212f6cf13fb92126a29" exitCode=0 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.622683 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qrn8" event={"ID":"f283aa6d-85ad-44ff-8758-d8251b00ae50","Type":"ContainerDied","Data":"4103d51c22bb98f593d103b0a3a160b75f3d4654273da212f6cf13fb92126a29"} Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.624714 4809 generic.go:334] "Generic (PLEG): container finished" podID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerID="6ac9290b8842acf4726ca719fcf6a69514ba4d833e22b09fb1312276e3df9ce0" exitCode=0 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.624773 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhrcn" 
event={"ID":"c914c474-5d5b-415b-ad58-76c7ac15dc94","Type":"ContainerDied","Data":"6ac9290b8842acf4726ca719fcf6a69514ba4d833e22b09fb1312276e3df9ce0"} Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.630282 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh6f\" (UniqueName: \"kubernetes.io/projected/49c3f940-85d8-49c5-a529-367c56018858-kube-api-access-vmh6f\") pod \"marketplace-operator-79b997595-lncvk\" (UID: \"49c3f940-85d8-49c5-a529-367c56018858\") " pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.634056 4809 generic.go:334] "Generic (PLEG): container finished" podID="512b5035-d9af-4615-b351-2199e94f9c50" containerID="1066f01edb8b85480901db466b5358fefe8b997fb408f1d63f431012ca7746d4" exitCode=0 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.634087 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" event={"ID":"512b5035-d9af-4615-b351-2199e94f9c50","Type":"ContainerDied","Data":"1066f01edb8b85480901db466b5358fefe8b997fb408f1d63f431012ca7746d4"} Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.634202 4809 scope.go:117] "RemoveContainer" containerID="dcf5ef7dc6b85329eeaf508ced67438c6450c1948cf660623dee159685fc882c" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.638000 4809 generic.go:334] "Generic (PLEG): container finished" podID="c060e910-de6e-43df-a148-66f07bc71180" containerID="7e9e966724b0fb49a8489a41cdda03032b8797814018c4b1ec1ef27116c7ce3a" exitCode=0 Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.638029 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jn4tc" event={"ID":"c060e910-de6e-43df-a148-66f07bc71180","Type":"ContainerDied","Data":"7e9e966724b0fb49a8489a41cdda03032b8797814018c4b1ec1ef27116c7ce3a"} Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.708517 4809 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.875745 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.885059 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.906385 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.928093 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:06:19 crc kubenswrapper[4809]: I0312 08:06:19.928578 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019733 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-utilities\") pod \"c060e910-de6e-43df-a148-66f07bc71180\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019776 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96kww\" (UniqueName: \"kubernetes.io/projected/c060e910-de6e-43df-a148-66f07bc71180-kube-api-access-96kww\") pod \"c060e910-de6e-43df-a148-66f07bc71180\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019803 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-utilities\") pod \"c914c474-5d5b-415b-ad58-76c7ac15dc94\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019848 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrjd7\" (UniqueName: \"kubernetes.io/projected/c914c474-5d5b-415b-ad58-76c7ac15dc94-kube-api-access-mrjd7\") pod \"c914c474-5d5b-415b-ad58-76c7ac15dc94\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019872 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvthr\" (UniqueName: \"kubernetes.io/projected/f283aa6d-85ad-44ff-8758-d8251b00ae50-kube-api-access-cvthr\") pod \"f283aa6d-85ad-44ff-8758-d8251b00ae50\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019893 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-utilities\") pod \"f283aa6d-85ad-44ff-8758-d8251b00ae50\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019918 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-trusted-ca\") pod \"512b5035-d9af-4615-b351-2199e94f9c50\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019940 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-catalog-content\") pod \"c060e910-de6e-43df-a148-66f07bc71180\" (UID: \"c060e910-de6e-43df-a148-66f07bc71180\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.019967 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-catalog-content\") pod \"f283aa6d-85ad-44ff-8758-d8251b00ae50\" (UID: \"f283aa6d-85ad-44ff-8758-d8251b00ae50\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.020010 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-catalog-content\") pod \"c914c474-5d5b-415b-ad58-76c7ac15dc94\" (UID: \"c914c474-5d5b-415b-ad58-76c7ac15dc94\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.020029 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-catalog-content\") pod \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\" (UID: 
\"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.020047 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-752tp\" (UniqueName: \"kubernetes.io/projected/512b5035-d9af-4615-b351-2199e94f9c50-kube-api-access-752tp\") pod \"512b5035-d9af-4615-b351-2199e94f9c50\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.020069 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlkfz\" (UniqueName: \"kubernetes.io/projected/bd4be45a-8370-4cbe-a718-bb31fd64d99a-kube-api-access-nlkfz\") pod \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.020083 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-utilities\") pod \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\" (UID: \"bd4be45a-8370-4cbe-a718-bb31fd64d99a\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.020105 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-operator-metrics\") pod \"512b5035-d9af-4615-b351-2199e94f9c50\" (UID: \"512b5035-d9af-4615-b351-2199e94f9c50\") " Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.021991 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-utilities" (OuterVolumeSpecName: "utilities") pod "bd4be45a-8370-4cbe-a718-bb31fd64d99a" (UID: "bd4be45a-8370-4cbe-a718-bb31fd64d99a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.022022 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-utilities" (OuterVolumeSpecName: "utilities") pod "c914c474-5d5b-415b-ad58-76c7ac15dc94" (UID: "c914c474-5d5b-415b-ad58-76c7ac15dc94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.022673 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "512b5035-d9af-4615-b351-2199e94f9c50" (UID: "512b5035-d9af-4615-b351-2199e94f9c50"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.022715 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-utilities" (OuterVolumeSpecName: "utilities") pod "c060e910-de6e-43df-a148-66f07bc71180" (UID: "c060e910-de6e-43df-a148-66f07bc71180"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.023317 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-utilities" (OuterVolumeSpecName: "utilities") pod "f283aa6d-85ad-44ff-8758-d8251b00ae50" (UID: "f283aa6d-85ad-44ff-8758-d8251b00ae50"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.025975 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f283aa6d-85ad-44ff-8758-d8251b00ae50-kube-api-access-cvthr" (OuterVolumeSpecName: "kube-api-access-cvthr") pod "f283aa6d-85ad-44ff-8758-d8251b00ae50" (UID: "f283aa6d-85ad-44ff-8758-d8251b00ae50"). InnerVolumeSpecName "kube-api-access-cvthr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.026248 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512b5035-d9af-4615-b351-2199e94f9c50-kube-api-access-752tp" (OuterVolumeSpecName: "kube-api-access-752tp") pod "512b5035-d9af-4615-b351-2199e94f9c50" (UID: "512b5035-d9af-4615-b351-2199e94f9c50"). InnerVolumeSpecName "kube-api-access-752tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.026501 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c914c474-5d5b-415b-ad58-76c7ac15dc94-kube-api-access-mrjd7" (OuterVolumeSpecName: "kube-api-access-mrjd7") pod "c914c474-5d5b-415b-ad58-76c7ac15dc94" (UID: "c914c474-5d5b-415b-ad58-76c7ac15dc94"). InnerVolumeSpecName "kube-api-access-mrjd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.026545 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c060e910-de6e-43df-a148-66f07bc71180-kube-api-access-96kww" (OuterVolumeSpecName: "kube-api-access-96kww") pod "c060e910-de6e-43df-a148-66f07bc71180" (UID: "c060e910-de6e-43df-a148-66f07bc71180"). InnerVolumeSpecName "kube-api-access-96kww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.027483 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd4be45a-8370-4cbe-a718-bb31fd64d99a-kube-api-access-nlkfz" (OuterVolumeSpecName: "kube-api-access-nlkfz") pod "bd4be45a-8370-4cbe-a718-bb31fd64d99a" (UID: "bd4be45a-8370-4cbe-a718-bb31fd64d99a"). InnerVolumeSpecName "kube-api-access-nlkfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.035759 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "512b5035-d9af-4615-b351-2199e94f9c50" (UID: "512b5035-d9af-4615-b351-2199e94f9c50"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.048431 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c060e910-de6e-43df-a148-66f07bc71180" (UID: "c060e910-de6e-43df-a148-66f07bc71180"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.092689 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd4be45a-8370-4cbe-a718-bb31fd64d99a" (UID: "bd4be45a-8370-4cbe-a718-bb31fd64d99a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.092966 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c914c474-5d5b-415b-ad58-76c7ac15dc94" (UID: "c914c474-5d5b-415b-ad58-76c7ac15dc94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121401 4809 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121445 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121456 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121464 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121472 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-752tp\" (UniqueName: \"kubernetes.io/projected/512b5035-d9af-4615-b351-2199e94f9c50-kube-api-access-752tp\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121480 4809 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nlkfz\" (UniqueName: \"kubernetes.io/projected/bd4be45a-8370-4cbe-a718-bb31fd64d99a-kube-api-access-nlkfz\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121491 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd4be45a-8370-4cbe-a718-bb31fd64d99a-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121500 4809 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/512b5035-d9af-4615-b351-2199e94f9c50-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121508 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c060e910-de6e-43df-a148-66f07bc71180-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121516 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96kww\" (UniqueName: \"kubernetes.io/projected/c060e910-de6e-43df-a148-66f07bc71180-kube-api-access-96kww\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121523 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c914c474-5d5b-415b-ad58-76c7ac15dc94-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121531 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrjd7\" (UniqueName: \"kubernetes.io/projected/c914c474-5d5b-415b-ad58-76c7ac15dc94-kube-api-access-mrjd7\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121541 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvthr\" (UniqueName: 
\"kubernetes.io/projected/f283aa6d-85ad-44ff-8758-d8251b00ae50-kube-api-access-cvthr\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.121549 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.190808 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f283aa6d-85ad-44ff-8758-d8251b00ae50" (UID: "f283aa6d-85ad-44ff-8758-d8251b00ae50"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.211875 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lncvk"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.222103 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f283aa6d-85ad-44ff-8758-d8251b00ae50-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.645710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhrcn" event={"ID":"c914c474-5d5b-415b-ad58-76c7ac15dc94","Type":"ContainerDied","Data":"51a4ff237bf9424c77a6bd2270d90673c891acd17c99791159b4e9ce221d45bb"} Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.646084 4809 scope.go:117] "RemoveContainer" containerID="6ac9290b8842acf4726ca719fcf6a69514ba4d833e22b09fb1312276e3df9ce0" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.645730 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhrcn" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.647139 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" event={"ID":"49c3f940-85d8-49c5-a529-367c56018858","Type":"ContainerStarted","Data":"07310c740ed332b3a570139b3f69295264d6bd4a8374daf9ca40441fc764022e"} Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.647189 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" event={"ID":"49c3f940-85d8-49c5-a529-367c56018858","Type":"ContainerStarted","Data":"a2a00510ac7cbc35a0d4cd5ff4df4fdaa8021f56f65d7e563eaf7835ace96648"} Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.647219 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.648284 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.648302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hrnnz" event={"ID":"512b5035-d9af-4615-b351-2199e94f9c50","Type":"ContainerDied","Data":"6eb87b3f4a897140c8631b1e3de2c8b460d0c5f5d94aa9803341a1a96391a1ac"} Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.649887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jn4tc" event={"ID":"c060e910-de6e-43df-a148-66f07bc71180","Type":"ContainerDied","Data":"37444f75c0babbbe85544bb605e046938218da9005ce316ca32c52d02ef8b24f"} Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.649948 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jn4tc" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.657435 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.659377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqsb2" event={"ID":"bd4be45a-8370-4cbe-a718-bb31fd64d99a","Type":"ContainerDied","Data":"4d44820f43bd97e63156bb805d5aa7b888ae62c51e188ca1f434ab459cecd21d"} Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.659413 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqsb2" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.661930 4809 scope.go:117] "RemoveContainer" containerID="37c9d156ce82bc23d254b0fdfc60ad8956e7abffce078f4f62f89baa9a799d85" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.662001 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qrn8" event={"ID":"f283aa6d-85ad-44ff-8758-d8251b00ae50","Type":"ContainerDied","Data":"0506ef62eabeb4fb3a1d648b591cd0a6ae41373327d468dca287c92c093b397c"} Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.662088 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qrn8" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.686317 4809 scope.go:117] "RemoveContainer" containerID="c6ac79cab9604a6ea1f843ee80c9be80b0d41407f6da541a7b6f125de9476009" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.692830 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" podStartSLOduration=1.692807561 podStartE2EDuration="1.692807561s" podCreationTimestamp="2026-03-12 08:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:06:20.674647062 +0000 UTC m=+454.256682785" watchObservedRunningTime="2026-03-12 08:06:20.692807561 +0000 UTC m=+454.274843294" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.728528 4809 scope.go:117] "RemoveContainer" containerID="1066f01edb8b85480901db466b5358fefe8b997fb408f1d63f431012ca7746d4" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.736453 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrnnz"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.743784 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hrnnz"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.748315 4809 scope.go:117] "RemoveContainer" containerID="7e9e966724b0fb49a8489a41cdda03032b8797814018c4b1ec1ef27116c7ce3a" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.748852 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhrcn"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.752026 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vhrcn"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.760230 4809 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jn4tc"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.767498 4809 scope.go:117] "RemoveContainer" containerID="020d1eb6a1107b0302c6913f22d418bd2a1c75dfc8ba4a4861e02ad42af5a363" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.771193 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jn4tc"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.780635 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pqsb2"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.783140 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pqsb2"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.783292 4809 scope.go:117] "RemoveContainer" containerID="98d4dcd19719f9d446fe81d94fdb22895a3e29ab190abea832e613920b2fd301" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.786563 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qrn8"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.789221 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4qrn8"] Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.794423 4809 scope.go:117] "RemoveContainer" containerID="be53b596bf62aac81e069baa64b49314df9e66ce3279fc49765f560dbae7810f" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.812604 4809 scope.go:117] "RemoveContainer" containerID="e4ccc725d789b3c681c84bcf99393a7b2f29a97f026c12037aa6a0eb9293a992" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.825157 4809 scope.go:117] "RemoveContainer" containerID="64665dd2e7b9f48fe25e16a0771d817ce1c5ee178d4741af1f43404a29c495a7" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.838244 4809 scope.go:117] "RemoveContainer" 
containerID="4103d51c22bb98f593d103b0a3a160b75f3d4654273da212f6cf13fb92126a29" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.849809 4809 scope.go:117] "RemoveContainer" containerID="4166c5bc26ff63694ab15f86f6827f1d249b39370efcc0315d54fb58d20c095e" Mar 12 08:06:20 crc kubenswrapper[4809]: I0312 08:06:20.870043 4809 scope.go:117] "RemoveContainer" containerID="c46ed0c35d3dadc5f3cfa341d9b06bd7ad18e531a54d2367f9cfdcbe3c3e9d5e" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.116475 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512b5035-d9af-4615-b351-2199e94f9c50" path="/var/lib/kubelet/pods/512b5035-d9af-4615-b351-2199e94f9c50/volumes" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.117534 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" path="/var/lib/kubelet/pods/bd4be45a-8370-4cbe-a718-bb31fd64d99a/volumes" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.118778 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c060e910-de6e-43df-a148-66f07bc71180" path="/var/lib/kubelet/pods/c060e910-de6e-43df-a148-66f07bc71180/volumes" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.120765 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" path="/var/lib/kubelet/pods/c914c474-5d5b-415b-ad58-76c7ac15dc94/volumes" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.121945 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" path="/var/lib/kubelet/pods/f283aa6d-85ad-44ff-8758-d8251b00ae50/volumes" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551463 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrqzc"] Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551696 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="extract-utilities" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551714 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="extract-utilities" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551728 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551736 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551748 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c060e910-de6e-43df-a148-66f07bc71180" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551757 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c060e910-de6e-43df-a148-66f07bc71180" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551768 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551777 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551787 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerName="extract-utilities" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551794 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerName="extract-utilities" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551800 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerName="extract-content" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551807 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerName="extract-content" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551817 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551827 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551838 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551845 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551858 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551867 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551876 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="extract-content" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551883 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="extract-content" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551891 4809 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c060e910-de6e-43df-a148-66f07bc71180" containerName="extract-content" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551899 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c060e910-de6e-43df-a148-66f07bc71180" containerName="extract-content" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551907 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerName="extract-content" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551917 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerName="extract-content" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551925 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerName="extract-utilities" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551931 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerName="extract-utilities" Mar 12 08:06:21 crc kubenswrapper[4809]: E0312 08:06:21.551942 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c060e910-de6e-43df-a148-66f07bc71180" containerName="extract-utilities" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.551949 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c060e910-de6e-43df-a148-66f07bc71180" containerName="extract-utilities" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.552058 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd4be45a-8370-4cbe-a718-bb31fd64d99a" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.552072 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c914c474-5d5b-415b-ad58-76c7ac15dc94" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.552082 4809 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c060e910-de6e-43df-a148-66f07bc71180" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.552093 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.552101 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f283aa6d-85ad-44ff-8758-d8251b00ae50" containerName="registry-server" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.552362 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="512b5035-d9af-4615-b351-2199e94f9c50" containerName="marketplace-operator" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.553069 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.557080 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.558145 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrqzc"] Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.642393 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-utilities\") pod \"redhat-marketplace-hrqzc\" (UID: \"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.642702 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-catalog-content\") pod \"redhat-marketplace-hrqzc\" (UID: 
\"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.642751 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dfv6\" (UniqueName: \"kubernetes.io/projected/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-kube-api-access-2dfv6\") pod \"redhat-marketplace-hrqzc\" (UID: \"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.743884 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-catalog-content\") pod \"redhat-marketplace-hrqzc\" (UID: \"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.743936 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dfv6\" (UniqueName: \"kubernetes.io/projected/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-kube-api-access-2dfv6\") pod \"redhat-marketplace-hrqzc\" (UID: \"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.743972 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-utilities\") pod \"redhat-marketplace-hrqzc\" (UID: \"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.744444 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-catalog-content\") pod \"redhat-marketplace-hrqzc\" (UID: 
\"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.744524 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-utilities\") pod \"redhat-marketplace-hrqzc\" (UID: \"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.750011 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pd2vq"] Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.752289 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.754323 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.764319 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pd2vq"] Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.769547 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dfv6\" (UniqueName: \"kubernetes.io/projected/d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1-kube-api-access-2dfv6\") pod \"redhat-marketplace-hrqzc\" (UID: \"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1\") " pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.845287 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec52f8eb-40dd-4475-9726-69b84829233d-utilities\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 
08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.845348 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl65w\" (UniqueName: \"kubernetes.io/projected/ec52f8eb-40dd-4475-9726-69b84829233d-kube-api-access-zl65w\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.845376 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec52f8eb-40dd-4475-9726-69b84829233d-catalog-content\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.877232 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.946388 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl65w\" (UniqueName: \"kubernetes.io/projected/ec52f8eb-40dd-4475-9726-69b84829233d-kube-api-access-zl65w\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.946442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec52f8eb-40dd-4475-9726-69b84829233d-catalog-content\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.946499 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ec52f8eb-40dd-4475-9726-69b84829233d-utilities\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.946890 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec52f8eb-40dd-4475-9726-69b84829233d-utilities\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.947848 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec52f8eb-40dd-4475-9726-69b84829233d-catalog-content\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:21 crc kubenswrapper[4809]: I0312 08:06:21.965498 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl65w\" (UniqueName: \"kubernetes.io/projected/ec52f8eb-40dd-4475-9726-69b84829233d-kube-api-access-zl65w\") pod \"redhat-operators-pd2vq\" (UID: \"ec52f8eb-40dd-4475-9726-69b84829233d\") " pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.086419 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.254685 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pd2vq"] Mar 12 08:06:22 crc kubenswrapper[4809]: W0312 08:06:22.256411 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec52f8eb_40dd_4475_9726_69b84829233d.slice/crio-1a0b82a486c8d14fce1c9bedf7d16fedad716a7233fbd0b96da97ce3e00dc907 WatchSource:0}: Error finding container 1a0b82a486c8d14fce1c9bedf7d16fedad716a7233fbd0b96da97ce3e00dc907: Status 404 returned error can't find the container with id 1a0b82a486c8d14fce1c9bedf7d16fedad716a7233fbd0b96da97ce3e00dc907 Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.299650 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrqzc"] Mar 12 08:06:22 crc kubenswrapper[4809]: W0312 08:06:22.305968 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0d3a3fc_1ad7_4a41_a732_d16ca4e7ceb1.slice/crio-e48bf99fd6374076215696eed9473905247168f86f0e168adb5c00a7322885b7 WatchSource:0}: Error finding container e48bf99fd6374076215696eed9473905247168f86f0e168adb5c00a7322885b7: Status 404 returned error can't find the container with id e48bf99fd6374076215696eed9473905247168f86f0e168adb5c00a7322885b7 Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.678975 4809 generic.go:334] "Generic (PLEG): container finished" podID="ec52f8eb-40dd-4475-9726-69b84829233d" containerID="9d91f82f5c869448e72035643d22bff6146ec5782c904583903d03ce54454926" exitCode=0 Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.679063 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd2vq" 
event={"ID":"ec52f8eb-40dd-4475-9726-69b84829233d","Type":"ContainerDied","Data":"9d91f82f5c869448e72035643d22bff6146ec5782c904583903d03ce54454926"} Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.679365 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd2vq" event={"ID":"ec52f8eb-40dd-4475-9726-69b84829233d","Type":"ContainerStarted","Data":"1a0b82a486c8d14fce1c9bedf7d16fedad716a7233fbd0b96da97ce3e00dc907"} Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.681761 4809 generic.go:334] "Generic (PLEG): container finished" podID="d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1" containerID="c83aece4e68d79586ae49c0e47ef7c01e6c3e4d9263a3cd9cf7b424fc40a7cc0" exitCode=0 Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.681826 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrqzc" event={"ID":"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1","Type":"ContainerDied","Data":"c83aece4e68d79586ae49c0e47ef7c01e6c3e4d9263a3cd9cf7b424fc40a7cc0"} Mar 12 08:06:22 crc kubenswrapper[4809]: I0312 08:06:22.681844 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrqzc" event={"ID":"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1","Type":"ContainerStarted","Data":"e48bf99fd6374076215696eed9473905247168f86f0e168adb5c00a7322885b7"} Mar 12 08:06:23 crc kubenswrapper[4809]: I0312 08:06:23.957522 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zqzjb"] Mar 12 08:06:23 crc kubenswrapper[4809]: I0312 08:06:23.958474 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:23 crc kubenswrapper[4809]: I0312 08:06:23.960815 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 08:06:23 crc kubenswrapper[4809]: I0312 08:06:23.980554 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqzjb"] Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.069073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2bd\" (UniqueName: \"kubernetes.io/projected/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-kube-api-access-sg2bd\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.069520 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-catalog-content\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.069580 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-utilities\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.148203 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nscs5"] Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.149138 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.151280 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.161674 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nscs5"] Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.170909 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2bd\" (UniqueName: \"kubernetes.io/projected/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-kube-api-access-sg2bd\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.170981 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-catalog-content\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.171137 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-utilities\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.171799 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-utilities\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " 
pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.173355 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-catalog-content\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.193603 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2bd\" (UniqueName: \"kubernetes.io/projected/f9d9e6d3-d87f-485b-bb03-6ed4f067de44-kube-api-access-sg2bd\") pod \"certified-operators-zqzjb\" (UID: \"f9d9e6d3-d87f-485b-bb03-6ed4f067de44\") " pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.272845 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-catalog-content\") pod \"community-operators-nscs5\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.272938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-utilities\") pod \"community-operators-nscs5\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.273006 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt5gk\" (UniqueName: \"kubernetes.io/projected/8d3cb5a3-ab13-4827-a66f-117150abe43b-kube-api-access-wt5gk\") pod \"community-operators-nscs5\" (UID: 
\"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.312850 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.374313 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt5gk\" (UniqueName: \"kubernetes.io/projected/8d3cb5a3-ab13-4827-a66f-117150abe43b-kube-api-access-wt5gk\") pod \"community-operators-nscs5\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.374358 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-catalog-content\") pod \"community-operators-nscs5\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.374405 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-utilities\") pod \"community-operators-nscs5\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.375389 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-utilities\") pod \"community-operators-nscs5\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.375852 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-catalog-content\") pod \"community-operators-nscs5\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.403466 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt5gk\" (UniqueName: \"kubernetes.io/projected/8d3cb5a3-ab13-4827-a66f-117150abe43b-kube-api-access-wt5gk\") pod \"community-operators-nscs5\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.479318 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.669833 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nscs5"] Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.694211 4809 generic.go:334] "Generic (PLEG): container finished" podID="d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1" containerID="4cd67c4991566581942c2d1e141b91a98e381398698fd4650e4a141ecaf760a1" exitCode=0 Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.694279 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrqzc" event={"ID":"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1","Type":"ContainerDied","Data":"4cd67c4991566581942c2d1e141b91a98e381398698fd4650e4a141ecaf760a1"} Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.698906 4809 generic.go:334] "Generic (PLEG): container finished" podID="ec52f8eb-40dd-4475-9726-69b84829233d" containerID="287c5cc635badc739725d036dc04be45e64a124d9850442cff1519588ec790ee" exitCode=0 Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.699030 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pd2vq" event={"ID":"ec52f8eb-40dd-4475-9726-69b84829233d","Type":"ContainerDied","Data":"287c5cc635badc739725d036dc04be45e64a124d9850442cff1519588ec790ee"} Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.699840 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nscs5" event={"ID":"8d3cb5a3-ab13-4827-a66f-117150abe43b","Type":"ContainerStarted","Data":"5e8cef456a3ccf9a28aa47c24a54b6117ebe89563983c82af93c3a0fc9485f6a"} Mar 12 08:06:24 crc kubenswrapper[4809]: I0312 08:06:24.764851 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqzjb"] Mar 12 08:06:24 crc kubenswrapper[4809]: W0312 08:06:24.773502 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9d9e6d3_d87f_485b_bb03_6ed4f067de44.slice/crio-ea1c7f999cb64eb5bfcba754517ded12503f77d1c0e430c149cf3ee83bd40f0f WatchSource:0}: Error finding container ea1c7f999cb64eb5bfcba754517ded12503f77d1c0e430c149cf3ee83bd40f0f: Status 404 returned error can't find the container with id ea1c7f999cb64eb5bfcba754517ded12503f77d1c0e430c149cf3ee83bd40f0f Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.100897 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pfh6p" Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.160752 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5d8gm"] Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.726094 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrqzc" event={"ID":"d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1","Type":"ContainerStarted","Data":"f53a71af3a5ee6cff52159ff172cd42b31dd4e98d7ee6493593c3cd0ee691c8a"} Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 
08:06:25.731294 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd2vq" event={"ID":"ec52f8eb-40dd-4475-9726-69b84829233d","Type":"ContainerStarted","Data":"37cebd807e5d7931b7f965abc667b0283e25131eb5fd1f94ac80ff5e6140b755"} Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.735025 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9d9e6d3-d87f-485b-bb03-6ed4f067de44" containerID="4b4578b3855159a8e69658a5beb3b1a47f5153d1245e81d20420d6596adcd880" exitCode=0 Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.735111 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqzjb" event={"ID":"f9d9e6d3-d87f-485b-bb03-6ed4f067de44","Type":"ContainerDied","Data":"4b4578b3855159a8e69658a5beb3b1a47f5153d1245e81d20420d6596adcd880"} Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.735160 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqzjb" event={"ID":"f9d9e6d3-d87f-485b-bb03-6ed4f067de44","Type":"ContainerStarted","Data":"ea1c7f999cb64eb5bfcba754517ded12503f77d1c0e430c149cf3ee83bd40f0f"} Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.738391 4809 generic.go:334] "Generic (PLEG): container finished" podID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerID="47739538ac431c900b19497312c4cc536e7833e54b717bbab04b1f63bdbfad77" exitCode=0 Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.738431 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nscs5" event={"ID":"8d3cb5a3-ab13-4827-a66f-117150abe43b","Type":"ContainerDied","Data":"47739538ac431c900b19497312c4cc536e7833e54b717bbab04b1f63bdbfad77"} Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.753228 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrqzc" podStartSLOduration=2.340777462 
podStartE2EDuration="4.753209219s" podCreationTimestamp="2026-03-12 08:06:21 +0000 UTC" firstStartedPulling="2026-03-12 08:06:22.683770124 +0000 UTC m=+456.265805857" lastFinishedPulling="2026-03-12 08:06:25.096201881 +0000 UTC m=+458.678237614" observedRunningTime="2026-03-12 08:06:25.748386086 +0000 UTC m=+459.330421819" watchObservedRunningTime="2026-03-12 08:06:25.753209219 +0000 UTC m=+459.335244972" Mar 12 08:06:25 crc kubenswrapper[4809]: I0312 08:06:25.807054 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pd2vq" podStartSLOduration=2.223666204 podStartE2EDuration="4.807036839s" podCreationTimestamp="2026-03-12 08:06:21 +0000 UTC" firstStartedPulling="2026-03-12 08:06:22.680507885 +0000 UTC m=+456.262543618" lastFinishedPulling="2026-03-12 08:06:25.26387852 +0000 UTC m=+458.845914253" observedRunningTime="2026-03-12 08:06:25.802183555 +0000 UTC m=+459.384219298" watchObservedRunningTime="2026-03-12 08:06:25.807036839 +0000 UTC m=+459.389072592" Mar 12 08:06:26 crc kubenswrapper[4809]: I0312 08:06:26.753593 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nscs5" event={"ID":"8d3cb5a3-ab13-4827-a66f-117150abe43b","Type":"ContainerStarted","Data":"f69ee30531caa2dd4c6c6d5bcb4fddd06cb9e0e40b4029435f823c0d136e8521"} Mar 12 08:06:27 crc kubenswrapper[4809]: I0312 08:06:27.759173 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9d9e6d3-d87f-485b-bb03-6ed4f067de44" containerID="e6a66334ca8d2dfd282e2f251782bd2397664fbdd681fa9598de607bb64e6272" exitCode=0 Mar 12 08:06:27 crc kubenswrapper[4809]: I0312 08:06:27.759203 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqzjb" event={"ID":"f9d9e6d3-d87f-485b-bb03-6ed4f067de44","Type":"ContainerDied","Data":"e6a66334ca8d2dfd282e2f251782bd2397664fbdd681fa9598de607bb64e6272"} Mar 12 08:06:27 crc kubenswrapper[4809]: I0312 08:06:27.761416 
4809 generic.go:334] "Generic (PLEG): container finished" podID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerID="f69ee30531caa2dd4c6c6d5bcb4fddd06cb9e0e40b4029435f823c0d136e8521" exitCode=0 Mar 12 08:06:27 crc kubenswrapper[4809]: I0312 08:06:27.761445 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nscs5" event={"ID":"8d3cb5a3-ab13-4827-a66f-117150abe43b","Type":"ContainerDied","Data":"f69ee30531caa2dd4c6c6d5bcb4fddd06cb9e0e40b4029435f823c0d136e8521"} Mar 12 08:06:28 crc kubenswrapper[4809]: I0312 08:06:28.768220 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqzjb" event={"ID":"f9d9e6d3-d87f-485b-bb03-6ed4f067de44","Type":"ContainerStarted","Data":"3be4dbc4e124c2f33f2c463ba7c45845bbc74ae4c1aafcb8ee60cfb1fd794c33"} Mar 12 08:06:28 crc kubenswrapper[4809]: I0312 08:06:28.773673 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nscs5" event={"ID":"8d3cb5a3-ab13-4827-a66f-117150abe43b","Type":"ContainerStarted","Data":"17d1302c6f20bbe09ea21efc6596ec988f06c1b114586cef8da05c3180ab136b"} Mar 12 08:06:28 crc kubenswrapper[4809]: I0312 08:06:28.798200 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zqzjb" podStartSLOduration=3.326488936 podStartE2EDuration="5.798172321s" podCreationTimestamp="2026-03-12 08:06:23 +0000 UTC" firstStartedPulling="2026-03-12 08:06:25.736693535 +0000 UTC m=+459.318729288" lastFinishedPulling="2026-03-12 08:06:28.20837692 +0000 UTC m=+461.790412673" observedRunningTime="2026-03-12 08:06:28.797526444 +0000 UTC m=+462.379562177" watchObservedRunningTime="2026-03-12 08:06:28.798172321 +0000 UTC m=+462.380208064" Mar 12 08:06:31 crc kubenswrapper[4809]: I0312 08:06:31.877704 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:31 
crc kubenswrapper[4809]: I0312 08:06:31.878103 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:31 crc kubenswrapper[4809]: I0312 08:06:31.929726 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:31 crc kubenswrapper[4809]: I0312 08:06:31.950760 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nscs5" podStartSLOduration=5.501135874 podStartE2EDuration="7.950738821s" podCreationTimestamp="2026-03-12 08:06:24 +0000 UTC" firstStartedPulling="2026-03-12 08:06:25.739651007 +0000 UTC m=+459.321686760" lastFinishedPulling="2026-03-12 08:06:28.189253974 +0000 UTC m=+461.771289707" observedRunningTime="2026-03-12 08:06:28.818142819 +0000 UTC m=+462.400178582" watchObservedRunningTime="2026-03-12 08:06:31.950738821 +0000 UTC m=+465.532774564" Mar 12 08:06:32 crc kubenswrapper[4809]: I0312 08:06:32.087255 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:32 crc kubenswrapper[4809]: I0312 08:06:32.087339 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:32 crc kubenswrapper[4809]: I0312 08:06:32.870388 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrqzc" Mar 12 08:06:33 crc kubenswrapper[4809]: I0312 08:06:33.132269 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pd2vq" podUID="ec52f8eb-40dd-4475-9726-69b84829233d" containerName="registry-server" probeResult="failure" output=< Mar 12 08:06:33 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:06:33 crc kubenswrapper[4809]: > Mar 12 08:06:34 crc 
kubenswrapper[4809]: I0312 08:06:34.313944 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:34 crc kubenswrapper[4809]: I0312 08:06:34.314483 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:34 crc kubenswrapper[4809]: I0312 08:06:34.359421 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:34 crc kubenswrapper[4809]: I0312 08:06:34.479500 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:34 crc kubenswrapper[4809]: I0312 08:06:34.479606 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:34 crc kubenswrapper[4809]: I0312 08:06:34.548957 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:34 crc kubenswrapper[4809]: I0312 08:06:34.879745 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:06:34 crc kubenswrapper[4809]: I0312 08:06:34.883246 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zqzjb" Mar 12 08:06:42 crc kubenswrapper[4809]: I0312 08:06:42.136594 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:42 crc kubenswrapper[4809]: I0312 08:06:42.189892 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pd2vq" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.207650 4809 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" podUID="b1c6c047-6fde-4d86-a82c-d8d259265412" containerName="registry" containerID="cri-o://355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599" gracePeriod=30 Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.646888 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.758685 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-tls\") pod \"b1c6c047-6fde-4d86-a82c-d8d259265412\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.758790 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-certificates\") pod \"b1c6c047-6fde-4d86-a82c-d8d259265412\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.758828 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-trusted-ca\") pod \"b1c6c047-6fde-4d86-a82c-d8d259265412\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.759070 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b1c6c047-6fde-4d86-a82c-d8d259265412\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.759187 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x8zb7\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-kube-api-access-x8zb7\") pod \"b1c6c047-6fde-4d86-a82c-d8d259265412\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.759231 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1c6c047-6fde-4d86-a82c-d8d259265412-ca-trust-extracted\") pod \"b1c6c047-6fde-4d86-a82c-d8d259265412\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.759322 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1c6c047-6fde-4d86-a82c-d8d259265412-installation-pull-secrets\") pod \"b1c6c047-6fde-4d86-a82c-d8d259265412\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.759362 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-bound-sa-token\") pod \"b1c6c047-6fde-4d86-a82c-d8d259265412\" (UID: \"b1c6c047-6fde-4d86-a82c-d8d259265412\") " Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.760941 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b1c6c047-6fde-4d86-a82c-d8d259265412" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.761459 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b1c6c047-6fde-4d86-a82c-d8d259265412" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.772068 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-kube-api-access-x8zb7" (OuterVolumeSpecName: "kube-api-access-x8zb7") pod "b1c6c047-6fde-4d86-a82c-d8d259265412" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412"). InnerVolumeSpecName "kube-api-access-x8zb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.772748 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b1c6c047-6fde-4d86-a82c-d8d259265412" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.773219 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b1c6c047-6fde-4d86-a82c-d8d259265412" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.773409 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b1c6c047-6fde-4d86-a82c-d8d259265412" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.774211 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c6c047-6fde-4d86-a82c-d8d259265412-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b1c6c047-6fde-4d86-a82c-d8d259265412" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.781477 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1c6c047-6fde-4d86-a82c-d8d259265412-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b1c6c047-6fde-4d86-a82c-d8d259265412" (UID: "b1c6c047-6fde-4d86-a82c-d8d259265412"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.860664 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.860721 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8zb7\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-kube-api-access-x8zb7\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.860734 4809 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1c6c047-6fde-4d86-a82c-d8d259265412-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.860744 4809 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1c6c047-6fde-4d86-a82c-d8d259265412-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.860755 4809 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.860765 4809 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.860774 4809 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1c6c047-6fde-4d86-a82c-d8d259265412-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 12 08:06:50 crc 
kubenswrapper[4809]: I0312 08:06:50.913174 4809 generic.go:334] "Generic (PLEG): container finished" podID="b1c6c047-6fde-4d86-a82c-d8d259265412" containerID="355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599" exitCode=0 Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.913263 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" event={"ID":"b1c6c047-6fde-4d86-a82c-d8d259265412","Type":"ContainerDied","Data":"355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599"} Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.913304 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.913992 4809 scope.go:117] "RemoveContainer" containerID="355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.914202 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5d8gm" event={"ID":"b1c6c047-6fde-4d86-a82c-d8d259265412","Type":"ContainerDied","Data":"920dd07679d4801e7ea236ad649d9bcd2bf86dca0add2da2d49e152f03ed3b48"} Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.950814 4809 scope.go:117] "RemoveContainer" containerID="355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599" Mar 12 08:06:50 crc kubenswrapper[4809]: E0312 08:06:50.954570 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599\": container with ID starting with 355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599 not found: ID does not exist" containerID="355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.954739 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599"} err="failed to get container status \"355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599\": rpc error: code = NotFound desc = could not find container \"355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599\": container with ID starting with 355ce1ca0ee2b297c82684e9fbd29946a3bd4392727bdc30c17e72db57b02599 not found: ID does not exist" Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.970499 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5d8gm"] Mar 12 08:06:50 crc kubenswrapper[4809]: I0312 08:06:50.978311 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5d8gm"] Mar 12 08:06:51 crc kubenswrapper[4809]: I0312 08:06:51.114721 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c6c047-6fde-4d86-a82c-d8d259265412" path="/var/lib/kubelet/pods/b1c6c047-6fde-4d86-a82c-d8d259265412/volumes" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.607451 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv"] Mar 12 08:07:12 crc kubenswrapper[4809]: E0312 08:07:12.608235 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c6c047-6fde-4d86-a82c-d8d259265412" containerName="registry" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.608249 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c6c047-6fde-4d86-a82c-d8d259265412" containerName="registry" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.608344 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c6c047-6fde-4d86-a82c-d8d259265412" containerName="registry" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.608845 4809 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.627334 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.627623 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.628153 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.632334 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv"] Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.639325 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.639902 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.696413 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb661875-d885-4bb9-a3cd-6904d6ccde16-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.696477 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bb661875-d885-4bb9-a3cd-6904d6ccde16-telemetry-config\") pod 
\"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.696730 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhs8\" (UniqueName: \"kubernetes.io/projected/bb661875-d885-4bb9-a3cd-6904d6ccde16-kube-api-access-kjhs8\") pod \"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.798540 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb661875-d885-4bb9-a3cd-6904d6ccde16-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.799580 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bb661875-d885-4bb9-a3cd-6904d6ccde16-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.799920 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjhs8\" (UniqueName: \"kubernetes.io/projected/bb661875-d885-4bb9-a3cd-6904d6ccde16-kube-api-access-kjhs8\") pod \"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc 
kubenswrapper[4809]: I0312 08:07:12.800587 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/bb661875-d885-4bb9-a3cd-6904d6ccde16-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.804323 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb661875-d885-4bb9-a3cd-6904d6ccde16-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.824154 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjhs8\" (UniqueName: \"kubernetes.io/projected/bb661875-d885-4bb9-a3cd-6904d6ccde16-kube-api-access-kjhs8\") pod \"cluster-monitoring-operator-6d5b84845-mhnnv\" (UID: \"bb661875-d885-4bb9-a3cd-6904d6ccde16\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:12 crc kubenswrapper[4809]: I0312 08:07:12.944975 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" Mar 12 08:07:13 crc kubenswrapper[4809]: I0312 08:07:13.389289 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv"] Mar 12 08:07:14 crc kubenswrapper[4809]: I0312 08:07:14.084438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" event={"ID":"bb661875-d885-4bb9-a3cd-6904d6ccde16","Type":"ContainerStarted","Data":"05ec8e373ca9a6f9bb30037f90f0f1421c5392fad4b51948bec186c32c2f2d47"} Mar 12 08:07:15 crc kubenswrapper[4809]: I0312 08:07:15.977880 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm"] Mar 12 08:07:15 crc kubenswrapper[4809]: I0312 08:07:15.978995 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" Mar 12 08:07:15 crc kubenswrapper[4809]: I0312 08:07:15.980809 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-hrdjr" Mar 12 08:07:15 crc kubenswrapper[4809]: I0312 08:07:15.981019 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 08:07:15 crc kubenswrapper[4809]: I0312 08:07:15.990058 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm"] Mar 12 08:07:16 crc kubenswrapper[4809]: I0312 08:07:16.045015 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2204a165-ee7b-4609-bed7-9683860bce5d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-5qhpm\" (UID: 
\"2204a165-ee7b-4609-bed7-9683860bce5d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" Mar 12 08:07:16 crc kubenswrapper[4809]: I0312 08:07:16.133072 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" event={"ID":"bb661875-d885-4bb9-a3cd-6904d6ccde16","Type":"ContainerStarted","Data":"5409c2fa1753cfd1d46512c34bef9ce0c118a7b4ba21bda5198b58d65f896ca0"} Mar 12 08:07:16 crc kubenswrapper[4809]: I0312 08:07:16.146166 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2204a165-ee7b-4609-bed7-9683860bce5d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-5qhpm\" (UID: \"2204a165-ee7b-4609-bed7-9683860bce5d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" Mar 12 08:07:16 crc kubenswrapper[4809]: I0312 08:07:16.151679 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2204a165-ee7b-4609-bed7-9683860bce5d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-5qhpm\" (UID: \"2204a165-ee7b-4609-bed7-9683860bce5d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" Mar 12 08:07:16 crc kubenswrapper[4809]: I0312 08:07:16.163181 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-mhnnv" podStartSLOduration=2.198611077 podStartE2EDuration="4.163164023s" podCreationTimestamp="2026-03-12 08:07:12 +0000 UTC" firstStartedPulling="2026-03-12 08:07:13.394385991 +0000 UTC m=+506.976421734" lastFinishedPulling="2026-03-12 08:07:15.358938927 +0000 UTC m=+508.940974680" observedRunningTime="2026-03-12 08:07:16.161768094 +0000 UTC m=+509.743803847" watchObservedRunningTime="2026-03-12 08:07:16.163164023 +0000 UTC 
m=+509.745199756" Mar 12 08:07:16 crc kubenswrapper[4809]: I0312 08:07:16.293302 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" Mar 12 08:07:16 crc kubenswrapper[4809]: I0312 08:07:16.515244 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm"] Mar 12 08:07:16 crc kubenswrapper[4809]: I0312 08:07:16.523845 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:07:17 crc kubenswrapper[4809]: I0312 08:07:17.141977 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" event={"ID":"2204a165-ee7b-4609-bed7-9683860bce5d","Type":"ContainerStarted","Data":"13f18b08632756672ab61c570ca8a189fb1cda5bdaef37a992ab5ce72b869eae"} Mar 12 08:07:19 crc kubenswrapper[4809]: I0312 08:07:19.155953 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" event={"ID":"2204a165-ee7b-4609-bed7-9683860bce5d","Type":"ContainerStarted","Data":"2ef62ac799deb0809963e6177cef2088c19a9d6b424d30f0121145975a233395"} Mar 12 08:07:19 crc kubenswrapper[4809]: I0312 08:07:19.156353 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" Mar 12 08:07:19 crc kubenswrapper[4809]: I0312 08:07:19.162214 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" Mar 12 08:07:19 crc kubenswrapper[4809]: I0312 08:07:19.176699 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" podStartSLOduration=1.9237800379999999 
podStartE2EDuration="4.17667503s" podCreationTimestamp="2026-03-12 08:07:15 +0000 UTC" firstStartedPulling="2026-03-12 08:07:16.52363016 +0000 UTC m=+510.105665893" lastFinishedPulling="2026-03-12 08:07:18.776525152 +0000 UTC m=+512.358560885" observedRunningTime="2026-03-12 08:07:19.171022555 +0000 UTC m=+512.753058288" watchObservedRunningTime="2026-03-12 08:07:19.17667503 +0000 UTC m=+512.758710763" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.078677 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zkwhc"] Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.079882 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.089290 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.089341 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.090370 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.090719 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-5nlt6" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.101728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a38e016f-30d0-47ac-877a-6a46f9e0ba90-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc 
kubenswrapper[4809]: I0312 08:07:20.101782 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf694\" (UniqueName: \"kubernetes.io/projected/a38e016f-30d0-47ac-877a-6a46f9e0ba90-kube-api-access-lf694\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.101821 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a38e016f-30d0-47ac-877a-6a46f9e0ba90-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.102042 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a38e016f-30d0-47ac-877a-6a46f9e0ba90-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.108849 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zkwhc"] Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.203579 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a38e016f-30d0-47ac-877a-6a46f9e0ba90-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.203634 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a38e016f-30d0-47ac-877a-6a46f9e0ba90-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.203676 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf694\" (UniqueName: \"kubernetes.io/projected/a38e016f-30d0-47ac-877a-6a46f9e0ba90-kube-api-access-lf694\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.203718 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a38e016f-30d0-47ac-877a-6a46f9e0ba90-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.204593 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a38e016f-30d0-47ac-877a-6a46f9e0ba90-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.208801 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a38e016f-30d0-47ac-877a-6a46f9e0ba90-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.212415 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a38e016f-30d0-47ac-877a-6a46f9e0ba90-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.221002 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf694\" (UniqueName: \"kubernetes.io/projected/a38e016f-30d0-47ac-877a-6a46f9e0ba90-kube-api-access-lf694\") pod \"prometheus-operator-db54df47d-zkwhc\" (UID: \"a38e016f-30d0-47ac-877a-6a46f9e0ba90\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:20 crc kubenswrapper[4809]: I0312 08:07:20.398262 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" Mar 12 08:07:21 crc kubenswrapper[4809]: I0312 08:07:20.661174 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zkwhc"] Mar 12 08:07:21 crc kubenswrapper[4809]: I0312 08:07:21.171100 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" event={"ID":"a38e016f-30d0-47ac-877a-6a46f9e0ba90","Type":"ContainerStarted","Data":"5de18a8bb9dfe29877460f838b6423b4227eedb4126dfabadf5b8482433ca4f4"} Mar 12 08:07:23 crc kubenswrapper[4809]: I0312 08:07:23.183718 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" event={"ID":"a38e016f-30d0-47ac-877a-6a46f9e0ba90","Type":"ContainerStarted","Data":"f868cefdb61bac9c932d411263a6f7e35d015f6ec0145f89c529cb5552438d80"} Mar 12 08:07:23 crc kubenswrapper[4809]: I0312 08:07:23.184358 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" event={"ID":"a38e016f-30d0-47ac-877a-6a46f9e0ba90","Type":"ContainerStarted","Data":"1751eef038fc1997b221a3e1c5e84511ae1cc95a856faaf1b1159afcc73b9919"} Mar 12 08:07:23 crc kubenswrapper[4809]: I0312 08:07:23.201196 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-zkwhc" podStartSLOduration=1.11262992 podStartE2EDuration="3.201171795s" podCreationTimestamp="2026-03-12 08:07:20 +0000 UTC" firstStartedPulling="2026-03-12 08:07:20.670492758 +0000 UTC m=+514.252528501" lastFinishedPulling="2026-03-12 08:07:22.759034643 +0000 UTC m=+516.341070376" observedRunningTime="2026-03-12 08:07:23.19770377 +0000 UTC m=+516.779739503" watchObservedRunningTime="2026-03-12 08:07:23.201171795 +0000 UTC m=+516.783207538" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.412828 4809 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v"] Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.414089 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.415762 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.415865 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.416189 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vswjz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.429610 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz"] Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.431151 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.435907 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.436031 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.437314 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.437333 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-6hbvd" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.444752 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v"] Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.452759 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz"] Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.494602 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.494660 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4666\" (UniqueName: 
\"kubernetes.io/projected/8184866b-b320-4c23-94d5-2feb3d60c90e-kube-api-access-j4666\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.494990 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.495070 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8184866b-b320-4c23-94d5-2feb3d60c90e-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.581407 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-r6p6w"] Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.582516 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.584636 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.584696 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.585572 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ktqf4" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596286 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/405ee83b-3c16-42ca-97da-6011d8d8d399-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596328 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596356 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-tls\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596389 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596415 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klls7\" (UniqueName: \"kubernetes.io/projected/a640e9c5-49e4-47fb-a113-9f81e659520f-kube-api-access-klls7\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596437 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4666\" (UniqueName: \"kubernetes.io/projected/8184866b-b320-4c23-94d5-2feb3d60c90e-kube-api-access-j4666\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596459 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-textfile\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596581 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-wtmp\") pod \"node-exporter-r6p6w\" 
(UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596628 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkrj\" (UniqueName: \"kubernetes.io/projected/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-api-access-phkrj\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596667 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596697 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596722 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-root\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596800 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a640e9c5-49e4-47fb-a113-9f81e659520f-metrics-client-ca\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596824 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/405ee83b-3c16-42ca-97da-6011d8d8d399-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: E0312 08:07:25.596849 4809 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.596861 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: E0312 08:07:25.596947 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-tls podName:8184866b-b320-4c23-94d5-2feb3d60c90e nodeName:}" failed. No retries permitted until 2026-03-12 08:07:26.096919143 +0000 UTC m=+519.678954886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-4tl7v" (UID: "8184866b-b320-4c23-94d5-2feb3d60c90e") : secret "openshift-state-metrics-tls" not found Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.597000 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-sys\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.597026 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.597059 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8184866b-b320-4c23-94d5-2feb3d60c90e-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.598263 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8184866b-b320-4c23-94d5-2feb3d60c90e-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.602238 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.625340 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4666\" (UniqueName: \"kubernetes.io/projected/8184866b-b320-4c23-94d5-2feb3d60c90e-kube-api-access-j4666\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697529 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-wtmp\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697594 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkrj\" (UniqueName: \"kubernetes.io/projected/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-api-access-phkrj\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697641 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697671 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-root\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697711 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a640e9c5-49e4-47fb-a113-9f81e659520f-metrics-client-ca\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697735 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/405ee83b-3c16-42ca-97da-6011d8d8d399-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697760 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697785 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-sys\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697806 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697843 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/405ee83b-3c16-42ca-97da-6011d8d8d399-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697875 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697875 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-wtmp\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 
08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697899 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-tls\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697927 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klls7\" (UniqueName: \"kubernetes.io/projected/a640e9c5-49e4-47fb-a113-9f81e659520f-kube-api-access-klls7\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.697960 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-textfile\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.698234 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-sys\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: E0312 08:07:25.698293 4809 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Mar 12 08:07:25 crc kubenswrapper[4809]: E0312 08:07:25.698457 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-tls podName:a640e9c5-49e4-47fb-a113-9f81e659520f nodeName:}" failed. 
No retries permitted until 2026-03-12 08:07:26.198435564 +0000 UTC m=+519.780471287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-tls") pod "node-exporter-r6p6w" (UID: "a640e9c5-49e4-47fb-a113-9f81e659520f") : secret "node-exporter-tls" not found Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.698328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a640e9c5-49e4-47fb-a113-9f81e659520f-root\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.698939 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/405ee83b-3c16-42ca-97da-6011d8d8d399-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.699049 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a640e9c5-49e4-47fb-a113-9f81e659520f-metrics-client-ca\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.699641 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.701328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-textfile\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.702010 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/405ee83b-3c16-42ca-97da-6011d8d8d399-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.702705 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.705865 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.708737 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-state-metrics-kube-rbac-proxy-config\") 
pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.714364 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkrj\" (UniqueName: \"kubernetes.io/projected/405ee83b-3c16-42ca-97da-6011d8d8d399-kube-api-access-phkrj\") pod \"kube-state-metrics-777cb5bd5d-9lzcz\" (UID: \"405ee83b-3c16-42ca-97da-6011d8d8d399\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.715770 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klls7\" (UniqueName: \"kubernetes.io/projected/a640e9c5-49e4-47fb-a113-9f81e659520f-kube-api-access-klls7\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:25 crc kubenswrapper[4809]: I0312 08:07:25.742080 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.101692 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.106079 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8184866b-b320-4c23-94d5-2feb3d60c90e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4tl7v\" (UID: \"8184866b-b320-4c23-94d5-2feb3d60c90e\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.149886 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz"] Mar 12 08:07:26 crc kubenswrapper[4809]: W0312 08:07:26.161275 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod405ee83b_3c16_42ca_97da_6011d8d8d399.slice/crio-0c8b2c0be45381f09bed693e14bca15a9fdf02a09eeeeaf64b65394b6147ef5f WatchSource:0}: Error finding container 0c8b2c0be45381f09bed693e14bca15a9fdf02a09eeeeaf64b65394b6147ef5f: Status 404 returned error can't find the container with id 0c8b2c0be45381f09bed693e14bca15a9fdf02a09eeeeaf64b65394b6147ef5f Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.203016 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-tls\") pod \"node-exporter-r6p6w\" (UID: 
\"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.203171 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" event={"ID":"405ee83b-3c16-42ca-97da-6011d8d8d399","Type":"ContainerStarted","Data":"0c8b2c0be45381f09bed693e14bca15a9fdf02a09eeeeaf64b65394b6147ef5f"} Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.208186 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a640e9c5-49e4-47fb-a113-9f81e659520f-node-exporter-tls\") pod \"node-exporter-r6p6w\" (UID: \"a640e9c5-49e4-47fb-a113-9f81e659520f\") " pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.328161 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.495729 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-r6p6w" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.665413 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v"] Mar 12 08:07:26 crc kubenswrapper[4809]: W0312 08:07:26.681293 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8184866b_b320_4c23_94d5_2feb3d60c90e.slice/crio-6a6085e003c568a275db7720aae99fdcd98aa7873167907f490073255942e587 WatchSource:0}: Error finding container 6a6085e003c568a275db7720aae99fdcd98aa7873167907f490073255942e587: Status 404 returned error can't find the container with id 6a6085e003c568a275db7720aae99fdcd98aa7873167907f490073255942e587 Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.694640 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.696628 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.702632 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.702846 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-6bspg" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.704149 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.705532 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.705689 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.705704 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.708797 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.717604 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.744068 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.764870 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.820844 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-config-volume\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.820908 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b464fbf-707a-4c62-aa45-f1d806dbf294-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.820931 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hcb\" (UniqueName: \"kubernetes.io/projected/3b464fbf-707a-4c62-aa45-f1d806dbf294-kube-api-access-k6hcb\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.820964 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3b464fbf-707a-4c62-aa45-f1d806dbf294-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.820993 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.821014 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.821035 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-web-config\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.821206 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.821274 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.821355 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: 
\"kubernetes.io/empty-dir/3b464fbf-707a-4c62-aa45-f1d806dbf294-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.821398 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3b464fbf-707a-4c62-aa45-f1d806dbf294-config-out\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.821430 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3b464fbf-707a-4c62-aa45-f1d806dbf294-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.923206 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-config-volume\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.923680 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b464fbf-707a-4c62-aa45-f1d806dbf294-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.923761 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hcb\" (UniqueName: 
\"kubernetes.io/projected/3b464fbf-707a-4c62-aa45-f1d806dbf294-kube-api-access-k6hcb\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.923919 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3b464fbf-707a-4c62-aa45-f1d806dbf294-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.923999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.924062 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.924144 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-web-config\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.924226 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.924289 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.924412 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3b464fbf-707a-4c62-aa45-f1d806dbf294-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.924487 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3b464fbf-707a-4c62-aa45-f1d806dbf294-config-out\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.924572 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3b464fbf-707a-4c62-aa45-f1d806dbf294-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.925244 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b464fbf-707a-4c62-aa45-f1d806dbf294-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.925690 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3b464fbf-707a-4c62-aa45-f1d806dbf294-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.926274 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3b464fbf-707a-4c62-aa45-f1d806dbf294-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.928923 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-web-config\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.929201 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3b464fbf-707a-4c62-aa45-f1d806dbf294-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.929310 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.929917 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-config-volume\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.930321 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3b464fbf-707a-4c62-aa45-f1d806dbf294-config-out\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.930454 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.930578 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.930844 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/3b464fbf-707a-4c62-aa45-f1d806dbf294-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:26 crc kubenswrapper[4809]: I0312 08:07:26.943636 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6hcb\" (UniqueName: \"kubernetes.io/projected/3b464fbf-707a-4c62-aa45-f1d806dbf294-kube-api-access-k6hcb\") pod \"alertmanager-main-0\" (UID: \"3b464fbf-707a-4c62-aa45-f1d806dbf294\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.013709 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.209422 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r6p6w" event={"ID":"a640e9c5-49e4-47fb-a113-9f81e659520f","Type":"ContainerStarted","Data":"d34f042781e6ced293a532ba4f0067c354f9d249d64878db677b0865c0ccde28"} Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.211616 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" event={"ID":"8184866b-b320-4c23-94d5-2feb3d60c90e","Type":"ContainerStarted","Data":"4b41a738c9d02ba557c5f697f7e49e866ad03cd846d03bac66a0664b19a70c4f"} Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.211669 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" event={"ID":"8184866b-b320-4c23-94d5-2feb3d60c90e","Type":"ContainerStarted","Data":"a293ca762e55270e5049427ca8bc0f3bb97d3c232bdf0f198f4fd9ddd3f87601"} Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.211680 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" 
event={"ID":"8184866b-b320-4c23-94d5-2feb3d60c90e","Type":"ContainerStarted","Data":"6a6085e003c568a275db7720aae99fdcd98aa7873167907f490073255942e587"} Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.416267 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 08:07:27 crc kubenswrapper[4809]: W0312 08:07:27.429310 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b464fbf_707a_4c62_aa45_f1d806dbf294.slice/crio-e729909e024af661591279186dae37953ea1a96547c0641e06938d815f8f2614 WatchSource:0}: Error finding container e729909e024af661591279186dae37953ea1a96547c0641e06938d815f8f2614: Status 404 returned error can't find the container with id e729909e024af661591279186dae37953ea1a96547c0641e06938d815f8f2614 Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.502426 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5b6975bb77-fcddt"] Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.507631 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.509574 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.510601 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.510820 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-mlqrr" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.511166 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.511304 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.511483 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.511609 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9oltqhnvf8s7g" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.520251 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5b6975bb77-fcddt"] Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.635299 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " 
pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.635366 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-grpc-tls\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.635400 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcwft\" (UniqueName: \"kubernetes.io/projected/8716fe6b-cd87-4777-8291-6078ce9929bc-kube-api-access-vcwft\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.635422 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8716fe6b-cd87-4777-8291-6078ce9929bc-metrics-client-ca\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.635537 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.635594 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.635780 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.635812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-tls\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.736865 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-grpc-tls\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.736932 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcwft\" (UniqueName: \"kubernetes.io/projected/8716fe6b-cd87-4777-8291-6078ce9929bc-kube-api-access-vcwft\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: 
\"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.736962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8716fe6b-cd87-4777-8291-6078ce9929bc-metrics-client-ca\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.736996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.737031 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.737103 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.737149 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-tls\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.737181 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.739632 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8716fe6b-cd87-4777-8291-6078ce9929bc-metrics-client-ca\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.742609 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-grpc-tls\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.750637 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " 
pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.750903 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-tls\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.751316 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.751333 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.753247 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8716fe6b-cd87-4777-8291-6078ce9929bc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.754009 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcwft\" 
(UniqueName: \"kubernetes.io/projected/8716fe6b-cd87-4777-8291-6078ce9929bc-kube-api-access-vcwft\") pod \"thanos-querier-5b6975bb77-fcddt\" (UID: \"8716fe6b-cd87-4777-8291-6078ce9929bc\") " pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:27 crc kubenswrapper[4809]: I0312 08:07:27.872433 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:28 crc kubenswrapper[4809]: I0312 08:07:28.091437 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5b6975bb77-fcddt"] Mar 12 08:07:28 crc kubenswrapper[4809]: I0312 08:07:28.220252 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3b464fbf-707a-4c62-aa45-f1d806dbf294","Type":"ContainerStarted","Data":"e729909e024af661591279186dae37953ea1a96547c0641e06938d815f8f2614"} Mar 12 08:07:28 crc kubenswrapper[4809]: W0312 08:07:28.406439 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8716fe6b_cd87_4777_8291_6078ce9929bc.slice/crio-64c4ff8a0f73ebb52bed4102c521642e8b4863730c956e1d24eb285705dc0d4d WatchSource:0}: Error finding container 64c4ff8a0f73ebb52bed4102c521642e8b4863730c956e1d24eb285705dc0d4d: Status 404 returned error can't find the container with id 64c4ff8a0f73ebb52bed4102c521642e8b4863730c956e1d24eb285705dc0d4d Mar 12 08:07:29 crc kubenswrapper[4809]: I0312 08:07:29.229998 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" event={"ID":"8184866b-b320-4c23-94d5-2feb3d60c90e","Type":"ContainerStarted","Data":"72f5f88d24e0373614b4e79cd81a93f27ace82d6e007ec927aab03192e46eaab"} Mar 12 08:07:29 crc kubenswrapper[4809]: I0312 08:07:29.234030 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" event={"ID":"405ee83b-3c16-42ca-97da-6011d8d8d399","Type":"ContainerStarted","Data":"befa9409a84633750bc6c0f87e4f460047630da449a4f761f49d3913824c5970"} Mar 12 08:07:29 crc kubenswrapper[4809]: I0312 08:07:29.239265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" event={"ID":"8716fe6b-cd87-4777-8291-6078ce9929bc","Type":"ContainerStarted","Data":"64c4ff8a0f73ebb52bed4102c521642e8b4863730c956e1d24eb285705dc0d4d"} Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.251644 4809 generic.go:334] "Generic (PLEG): container finished" podID="3b464fbf-707a-4c62-aa45-f1d806dbf294" containerID="88f319925f14c3cd17068b926685b3d1acfd08c4072a7829a64c3363a8769af4" exitCode=0 Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.252051 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3b464fbf-707a-4c62-aa45-f1d806dbf294","Type":"ContainerDied","Data":"88f319925f14c3cd17068b926685b3d1acfd08c4072a7829a64c3363a8769af4"} Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.263184 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" event={"ID":"405ee83b-3c16-42ca-97da-6011d8d8d399","Type":"ContainerStarted","Data":"d6bc965ad742f591651ccd9e8900f57d9fe6315d012753255b6a9914727c5842"} Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.263245 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" event={"ID":"405ee83b-3c16-42ca-97da-6011d8d8d399","Type":"ContainerStarted","Data":"683234739af5aae933066f920322c9a52128a30f05044f0e7069f86dee71931c"} Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.282797 4809 generic.go:334] "Generic (PLEG): container finished" podID="a640e9c5-49e4-47fb-a113-9f81e659520f" 
containerID="48c766dcf4942a89a0f9621a04481963bfc812a228280c39d60cc83a2754b18a" exitCode=0 Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.283244 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r6p6w" event={"ID":"a640e9c5-49e4-47fb-a113-9f81e659520f","Type":"ContainerDied","Data":"48c766dcf4942a89a0f9621a04481963bfc812a228280c39d60cc83a2754b18a"} Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.286457 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4tl7v" podStartSLOduration=3.251861967 podStartE2EDuration="5.286432454s" podCreationTimestamp="2026-03-12 08:07:25 +0000 UTC" firstStartedPulling="2026-03-12 08:07:26.970749917 +0000 UTC m=+520.552785650" lastFinishedPulling="2026-03-12 08:07:29.005320394 +0000 UTC m=+522.587356137" observedRunningTime="2026-03-12 08:07:29.254682754 +0000 UTC m=+522.836718527" watchObservedRunningTime="2026-03-12 08:07:30.286432454 +0000 UTC m=+523.868468197" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.290327 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5f4d7b984b-8kzpt"] Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.291592 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.304417 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f4d7b984b-8kzpt"] Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.344699 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9lzcz" podStartSLOduration=2.504398883 podStartE2EDuration="5.344679748s" podCreationTimestamp="2026-03-12 08:07:25 +0000 UTC" firstStartedPulling="2026-03-12 08:07:26.164437412 +0000 UTC m=+519.746473185" lastFinishedPulling="2026-03-12 08:07:29.004718317 +0000 UTC m=+522.586754050" observedRunningTime="2026-03-12 08:07:30.326686916 +0000 UTC m=+523.908722649" watchObservedRunningTime="2026-03-12 08:07:30.344679748 +0000 UTC m=+523.926715481" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.389037 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-service-ca\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.389101 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-trusted-ca-bundle\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.389204 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-oauth-serving-cert\") pod 
\"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.389268 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-config\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.389412 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-oauth-config\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.389449 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-serving-cert\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.389480 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488pr\" (UniqueName: \"kubernetes.io/projected/c785b38a-f6f9-4152-a343-814d2b15a3a1-kube-api-access-488pr\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.490377 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-service-ca\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.490844 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-trusted-ca-bundle\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.490885 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-oauth-serving-cert\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.490916 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-config\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.490993 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-oauth-config\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.491028 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-serving-cert\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.491055 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488pr\" (UniqueName: \"kubernetes.io/projected/c785b38a-f6f9-4152-a343-814d2b15a3a1-kube-api-access-488pr\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.491728 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-service-ca\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.492604 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-oauth-serving-cert\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.492693 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-trusted-ca-bundle\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.492883 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-config\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.498311 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-oauth-config\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.498528 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-serving-cert\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.510262 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488pr\" (UniqueName: \"kubernetes.io/projected/c785b38a-f6f9-4152-a343-814d2b15a3a1-kube-api-access-488pr\") pod \"console-5f4d7b984b-8kzpt\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.617672 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.816643 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f4d7b984b-8kzpt"] Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.847281 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-59b6cb496c-xqg5c"] Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.848259 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.851453 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-bcb98" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.851686 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.851932 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.852223 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-d1gknmss8egio" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.852976 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.862405 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.869714 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-59b6cb496c-xqg5c"] Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.998239 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-secret-metrics-client-certs\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.998359 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-secret-metrics-server-tls\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.998389 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lwt\" (UniqueName: \"kubernetes.io/projected/f3699ab3-2222-401a-b14c-3fef168b6861-kube-api-access-q6lwt\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.998615 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f3699ab3-2222-401a-b14c-3fef168b6861-audit-log\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.998704 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f3699ab3-2222-401a-b14c-3fef168b6861-metrics-server-audit-profiles\") pod \"metrics-server-59b6cb496c-xqg5c\" 
(UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.998761 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-client-ca-bundle\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:30 crc kubenswrapper[4809]: I0312 08:07:30.998833 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3699ab3-2222-401a-b14c-3fef168b6861-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.100377 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-secret-metrics-server-tls\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.100732 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6lwt\" (UniqueName: \"kubernetes.io/projected/f3699ab3-2222-401a-b14c-3fef168b6861-kube-api-access-q6lwt\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.100958 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f3699ab3-2222-401a-b14c-3fef168b6861-audit-log\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.101418 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f3699ab3-2222-401a-b14c-3fef168b6861-metrics-server-audit-profiles\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.101521 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f3699ab3-2222-401a-b14c-3fef168b6861-audit-log\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.101808 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-client-ca-bundle\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.103453 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f3699ab3-2222-401a-b14c-3fef168b6861-metrics-server-audit-profiles\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.105213 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3699ab3-2222-401a-b14c-3fef168b6861-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.105302 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-secret-metrics-client-certs\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.106635 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-client-ca-bundle\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.107252 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-secret-metrics-server-tls\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.108673 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f3699ab3-2222-401a-b14c-3fef168b6861-secret-metrics-client-certs\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " 
pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.108937 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3699ab3-2222-401a-b14c-3fef168b6861-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.129839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6lwt\" (UniqueName: \"kubernetes.io/projected/f3699ab3-2222-401a-b14c-3fef168b6861-kube-api-access-q6lwt\") pod \"metrics-server-59b6cb496c-xqg5c\" (UID: \"f3699ab3-2222-401a-b14c-3fef168b6861\") " pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.202786 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.262772 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc"] Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.264203 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.266557 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.267020 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.271191 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc"] Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.298207 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r6p6w" event={"ID":"a640e9c5-49e4-47fb-a113-9f81e659520f","Type":"ContainerStarted","Data":"e0857d516b962acaf09ea285d3ef203e4324f884deced32041f8943b7124c544"} Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.298264 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r6p6w" event={"ID":"a640e9c5-49e4-47fb-a113-9f81e659520f","Type":"ContainerStarted","Data":"8ca90a87aa194cb9dccf1142cada0dae4997f674aec9128c2ce121baa4b0d00a"} Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.306069 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f4d7b984b-8kzpt" event={"ID":"c785b38a-f6f9-4152-a343-814d2b15a3a1","Type":"ContainerStarted","Data":"5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b"} Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.306149 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f4d7b984b-8kzpt" event={"ID":"c785b38a-f6f9-4152-a343-814d2b15a3a1","Type":"ContainerStarted","Data":"ee3bbd4eba71ed922c624f39175e9bf63e4011ccea598dbeb809101148c6cec1"} Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.325280 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9b29409f-7b59-433f-9daf-1c9bd70ef6a8-monitoring-plugin-cert\") pod \"monitoring-plugin-56cf9d75b7-58kgc\" (UID: \"9b29409f-7b59-433f-9daf-1c9bd70ef6a8\") " pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.326528 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-r6p6w" podStartSLOduration=3.860898718 podStartE2EDuration="6.326502541s" podCreationTimestamp="2026-03-12 08:07:25 +0000 UTC" firstStartedPulling="2026-03-12 08:07:26.55028525 +0000 UTC m=+520.132320983" lastFinishedPulling="2026-03-12 08:07:29.015889073 +0000 UTC m=+522.597924806" observedRunningTime="2026-03-12 08:07:31.319704885 +0000 UTC m=+524.901740618" watchObservedRunningTime="2026-03-12 08:07:31.326502541 +0000 UTC m=+524.908538284" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.347957 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5f4d7b984b-8kzpt" podStartSLOduration=1.347935318 podStartE2EDuration="1.347935318s" podCreationTimestamp="2026-03-12 08:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:07:31.343604039 +0000 UTC m=+524.925639792" watchObservedRunningTime="2026-03-12 08:07:31.347935318 +0000 UTC m=+524.929971061" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.427999 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9b29409f-7b59-433f-9daf-1c9bd70ef6a8-monitoring-plugin-cert\") pod \"monitoring-plugin-56cf9d75b7-58kgc\" (UID: \"9b29409f-7b59-433f-9daf-1c9bd70ef6a8\") " pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 08:07:31 crc 
kubenswrapper[4809]: I0312 08:07:31.431720 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9b29409f-7b59-433f-9daf-1c9bd70ef6a8-monitoring-plugin-cert\") pod \"monitoring-plugin-56cf9d75b7-58kgc\" (UID: \"9b29409f-7b59-433f-9daf-1c9bd70ef6a8\") " pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.601735 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.642125 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-59b6cb496c-xqg5c"] Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.841844 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.846789 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.850836 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.850921 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.853290 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.853792 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc"] Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.853889 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.854001 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.854080 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.854138 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-dhiqj682bbgf6" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.854243 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.854344 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.855887 4809 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.858487 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.860602 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-wvjpk" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.862946 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.865816 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.937879 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/97133eff-e7a3-42a1-833a-674e836f7be8-config-out\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.937931 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.937962 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.937983 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-config\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938022 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938052 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938095 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-k8s-db\") pod 
\"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938144 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-web-config\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938195 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/97133eff-e7a3-42a1-833a-674e836f7be8-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938240 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938264 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938282 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938312 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938336 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938358 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:31 crc kubenswrapper[4809]: I0312 08:07:31.938380 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb88h\" (UniqueName: \"kubernetes.io/projected/97133eff-e7a3-42a1-833a-674e836f7be8-kube-api-access-wb88h\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.039959 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/97133eff-e7a3-42a1-833a-674e836f7be8-config-out\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040006 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040033 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040051 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-config\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040073 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040098 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040153 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040172 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040196 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-web-config\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040214 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040252 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/97133eff-e7a3-42a1-833a-674e836f7be8-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040299 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040322 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040339 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040363 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040379 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040397 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb88h\" (UniqueName: \"kubernetes.io/projected/97133eff-e7a3-42a1-833a-674e836f7be8-kube-api-access-wb88h\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.040414 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.041178 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.041422 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.042314 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.042866 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.044937 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-config\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.045049 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.045495 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.046243 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/97133eff-e7a3-42a1-833a-674e836f7be8-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.047894 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.049075 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.049847 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.051065 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/97133eff-e7a3-42a1-833a-674e836f7be8-config-out\") pod \"prometheus-k8s-0\" 
(UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.055224 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.056924 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-web-config\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.066415 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/97133eff-e7a3-42a1-833a-674e836f7be8-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.068259 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.075429 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/97133eff-e7a3-42a1-833a-674e836f7be8-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: 
\"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.077098 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb88h\" (UniqueName: \"kubernetes.io/projected/97133eff-e7a3-42a1-833a-674e836f7be8-kube-api-access-wb88h\") pod \"prometheus-k8s-0\" (UID: \"97133eff-e7a3-42a1-833a-674e836f7be8\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.169856 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.311968 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" event={"ID":"f3699ab3-2222-401a-b14c-3fef168b6861","Type":"ContainerStarted","Data":"d465464ac6eb04cfd442889c771818ff6c2a0185b75628ad9655a9e8129f57cd"} Mar 12 08:07:32 crc kubenswrapper[4809]: I0312 08:07:32.313930 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" event={"ID":"9b29409f-7b59-433f-9daf-1c9bd70ef6a8","Type":"ContainerStarted","Data":"fe7b9a7ec9c12db1654181d6e3ff81ba3bf4a92ce2f0c4e3565b3cdbce202076"} Mar 12 08:07:33 crc kubenswrapper[4809]: I0312 08:07:33.741095 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.328315 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3b464fbf-707a-4c62-aa45-f1d806dbf294","Type":"ContainerStarted","Data":"eb80afe909faa07e84ff69534c6ad00774f23adf527f6da1c868d3654209bbc0"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.328382 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"3b464fbf-707a-4c62-aa45-f1d806dbf294","Type":"ContainerStarted","Data":"0ea21e71fb790a6bc3eb152f89831794014862d832a04079b274fb0ae52b3230"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.328396 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3b464fbf-707a-4c62-aa45-f1d806dbf294","Type":"ContainerStarted","Data":"d849c8b1ead07abcb85a104b9627ddc955deb0bc84aa78b662b7fac64431425c"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.328408 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3b464fbf-707a-4c62-aa45-f1d806dbf294","Type":"ContainerStarted","Data":"f94c6040b4e84202c4b2e071b913c75116785cee74cec6d8cba0073411bc7d4d"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.328418 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3b464fbf-707a-4c62-aa45-f1d806dbf294","Type":"ContainerStarted","Data":"76b4515dab020504d819cc645d43a1b53ccc0a85b251cf205e79dd943d4703a1"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.330168 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" event={"ID":"8716fe6b-cd87-4777-8291-6078ce9929bc","Type":"ContainerStarted","Data":"ce4c66f5f1f639473df3a276a059e423b3db6d078dd815ec959a221ca1464dc8"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.330219 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" event={"ID":"8716fe6b-cd87-4777-8291-6078ce9929bc","Type":"ContainerStarted","Data":"89933f4773d336000557da7dee2c9a95de4203e8d3dc355b3fe48953cf976f75"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.330229 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" 
event={"ID":"8716fe6b-cd87-4777-8291-6078ce9929bc","Type":"ContainerStarted","Data":"238ae2fd96e54ce0af8ac8c856b5c659b9912d5fbf9f105dd39293925024f3c5"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.338486 4809 generic.go:334] "Generic (PLEG): container finished" podID="97133eff-e7a3-42a1-833a-674e836f7be8" containerID="1a33f2fcd6c0e64117596c856d3c7a8cf8c231b146e97c3dcee9dec209931460" exitCode=0 Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.338536 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"97133eff-e7a3-42a1-833a-674e836f7be8","Type":"ContainerDied","Data":"1a33f2fcd6c0e64117596c856d3c7a8cf8c231b146e97c3dcee9dec209931460"} Mar 12 08:07:34 crc kubenswrapper[4809]: I0312 08:07:34.338579 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"97133eff-e7a3-42a1-833a-674e836f7be8","Type":"ContainerStarted","Data":"a42c2a74aa14a496ffef2f55370dc3625b77d42918d09d1be9917c21879e23db"} Mar 12 08:07:35 crc kubenswrapper[4809]: I0312 08:07:35.347210 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" event={"ID":"f3699ab3-2222-401a-b14c-3fef168b6861","Type":"ContainerStarted","Data":"3ccab1e1e9f8ef6ce1dd1600d828bdc06505f15b55d747876b14e8812eecf8cc"} Mar 12 08:07:35 crc kubenswrapper[4809]: I0312 08:07:35.348795 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" event={"ID":"9b29409f-7b59-433f-9daf-1c9bd70ef6a8","Type":"ContainerStarted","Data":"2cf54f5810469af92192650760bae417599216f18eace79d0cfced0b290179a3"} Mar 12 08:07:35 crc kubenswrapper[4809]: I0312 08:07:35.349434 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 08:07:35 crc kubenswrapper[4809]: I0312 08:07:35.355527 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 08:07:35 crc kubenswrapper[4809]: I0312 08:07:35.365061 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" podStartSLOduration=2.097720334 podStartE2EDuration="5.365039306s" podCreationTimestamp="2026-03-12 08:07:30 +0000 UTC" firstStartedPulling="2026-03-12 08:07:31.66199001 +0000 UTC m=+525.244025743" lastFinishedPulling="2026-03-12 08:07:34.929308992 +0000 UTC m=+528.511344715" observedRunningTime="2026-03-12 08:07:35.363595237 +0000 UTC m=+528.945630970" watchObservedRunningTime="2026-03-12 08:07:35.365039306 +0000 UTC m=+528.947075039" Mar 12 08:07:35 crc kubenswrapper[4809]: I0312 08:07:35.408376 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" podStartSLOduration=1.3555195150000001 podStartE2EDuration="4.408348603s" podCreationTimestamp="2026-03-12 08:07:31 +0000 UTC" firstStartedPulling="2026-03-12 08:07:31.877869772 +0000 UTC m=+525.459905505" lastFinishedPulling="2026-03-12 08:07:34.93069886 +0000 UTC m=+528.512734593" observedRunningTime="2026-03-12 08:07:35.384849999 +0000 UTC m=+528.966885732" watchObservedRunningTime="2026-03-12 08:07:35.408348603 +0000 UTC m=+528.990384336" Mar 12 08:07:36 crc kubenswrapper[4809]: I0312 08:07:36.376280 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3b464fbf-707a-4c62-aa45-f1d806dbf294","Type":"ContainerStarted","Data":"073d0ab5693fddb640e2e248460fa19970672d6ffbf10de1aecc017e9aea12c8"} Mar 12 08:07:36 crc kubenswrapper[4809]: I0312 08:07:36.384275 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" 
event={"ID":"8716fe6b-cd87-4777-8291-6078ce9929bc","Type":"ContainerStarted","Data":"82594c4e4c01080e912718a75737f981077b95a8deeae8528e320ba8ff06c8b1"} Mar 12 08:07:36 crc kubenswrapper[4809]: I0312 08:07:36.384331 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" event={"ID":"8716fe6b-cd87-4777-8291-6078ce9929bc","Type":"ContainerStarted","Data":"3c1bc8c2c1c9b41b028accf7704ac13f9b815b596a47f1e8ca40d287e9e91421"} Mar 12 08:07:36 crc kubenswrapper[4809]: I0312 08:07:36.384343 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" event={"ID":"8716fe6b-cd87-4777-8291-6078ce9929bc","Type":"ContainerStarted","Data":"b451ac7009b2bc342788ce9e1038621f7fc50793e69aeabcc100116e2f2bbfe1"} Mar 12 08:07:36 crc kubenswrapper[4809]: I0312 08:07:36.384548 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:36 crc kubenswrapper[4809]: I0312 08:07:36.420932 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.225917186 podStartE2EDuration="10.420888626s" podCreationTimestamp="2026-03-12 08:07:26 +0000 UTC" firstStartedPulling="2026-03-12 08:07:27.431105116 +0000 UTC m=+521.013140849" lastFinishedPulling="2026-03-12 08:07:35.626076556 +0000 UTC m=+529.208112289" observedRunningTime="2026-03-12 08:07:36.413785621 +0000 UTC m=+529.995821354" watchObservedRunningTime="2026-03-12 08:07:36.420888626 +0000 UTC m=+530.002924359" Mar 12 08:07:36 crc kubenswrapper[4809]: I0312 08:07:36.485862 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" podStartSLOduration=2.293020253 podStartE2EDuration="9.485834995s" podCreationTimestamp="2026-03-12 08:07:27 +0000 UTC" firstStartedPulling="2026-03-12 08:07:28.42653287 +0000 
UTC m=+522.008568603" lastFinishedPulling="2026-03-12 08:07:35.619347612 +0000 UTC m=+529.201383345" observedRunningTime="2026-03-12 08:07:36.483918082 +0000 UTC m=+530.065953835" watchObservedRunningTime="2026-03-12 08:07:36.485834995 +0000 UTC m=+530.067870728" Mar 12 08:07:38 crc kubenswrapper[4809]: I0312 08:07:38.403299 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" Mar 12 08:07:39 crc kubenswrapper[4809]: I0312 08:07:39.401930 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"97133eff-e7a3-42a1-833a-674e836f7be8","Type":"ContainerStarted","Data":"c3d09c5e49b20965733dc5bbccd844e7114e8725c6ad1fdae1d03a11c5c8b454"} Mar 12 08:07:39 crc kubenswrapper[4809]: I0312 08:07:39.402265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"97133eff-e7a3-42a1-833a-674e836f7be8","Type":"ContainerStarted","Data":"0c69adc51ce31b0f24e54c4d8f7aeb4ed2b9857b04e6774442b93a0ce06c5c9e"} Mar 12 08:07:39 crc kubenswrapper[4809]: I0312 08:07:39.402279 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"97133eff-e7a3-42a1-833a-674e836f7be8","Type":"ContainerStarted","Data":"61831bccd3686c84debf0b2d70aafe320de29af902c478cd96729234dde1df02"} Mar 12 08:07:39 crc kubenswrapper[4809]: I0312 08:07:39.402290 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"97133eff-e7a3-42a1-833a-674e836f7be8","Type":"ContainerStarted","Data":"60e0c77269d9c7134ff59e59c367650f22ee3e35e46801838786b5979a1b5cfa"} Mar 12 08:07:39 crc kubenswrapper[4809]: I0312 08:07:39.402299 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"97133eff-e7a3-42a1-833a-674e836f7be8","Type":"ContainerStarted","Data":"46d5f15db0242b1b4ebae80898fd623a52410a0936003a8b2f4ecf5ad52a1ff9"} Mar 12 08:07:39 crc kubenswrapper[4809]: I0312 08:07:39.402307 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"97133eff-e7a3-42a1-833a-674e836f7be8","Type":"ContainerStarted","Data":"a6998d14e1ee6b9e7b58b24df7449953074b68781955aaccbf9bc7eba1868073"} Mar 12 08:07:39 crc kubenswrapper[4809]: I0312 08:07:39.453464 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.536161524 podStartE2EDuration="8.453434408s" podCreationTimestamp="2026-03-12 08:07:31 +0000 UTC" firstStartedPulling="2026-03-12 08:07:34.34034161 +0000 UTC m=+527.922377343" lastFinishedPulling="2026-03-12 08:07:38.257614484 +0000 UTC m=+531.839650227" observedRunningTime="2026-03-12 08:07:39.450573389 +0000 UTC m=+533.032609182" watchObservedRunningTime="2026-03-12 08:07:39.453434408 +0000 UTC m=+533.035470181" Mar 12 08:07:40 crc kubenswrapper[4809]: I0312 08:07:40.618735 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:40 crc kubenswrapper[4809]: I0312 08:07:40.622284 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:40 crc kubenswrapper[4809]: I0312 08:07:40.630333 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:41 crc kubenswrapper[4809]: I0312 08:07:41.427520 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:07:41 crc kubenswrapper[4809]: I0312 08:07:41.489093 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rvfv4"] Mar 
12 08:07:42 crc kubenswrapper[4809]: I0312 08:07:42.171063 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:07:51 crc kubenswrapper[4809]: I0312 08:07:51.203820 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:07:51 crc kubenswrapper[4809]: I0312 08:07:51.204383 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.170221 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555048-5nxr7"] Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.180472 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555048-5nxr7"] Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.180593 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.188599 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.188856 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.188883 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.292299 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvn7\" (UniqueName: \"kubernetes.io/projected/3d51c3ce-6d91-4798-9fed-d45c18fad38d-kube-api-access-shvn7\") pod \"auto-csr-approver-29555048-5nxr7\" (UID: \"3d51c3ce-6d91-4798-9fed-d45c18fad38d\") " pod="openshift-infra/auto-csr-approver-29555048-5nxr7" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.394737 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shvn7\" (UniqueName: \"kubernetes.io/projected/3d51c3ce-6d91-4798-9fed-d45c18fad38d-kube-api-access-shvn7\") pod \"auto-csr-approver-29555048-5nxr7\" (UID: \"3d51c3ce-6d91-4798-9fed-d45c18fad38d\") " pod="openshift-infra/auto-csr-approver-29555048-5nxr7" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.431987 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shvn7\" (UniqueName: \"kubernetes.io/projected/3d51c3ce-6d91-4798-9fed-d45c18fad38d-kube-api-access-shvn7\") pod \"auto-csr-approver-29555048-5nxr7\" (UID: \"3d51c3ce-6d91-4798-9fed-d45c18fad38d\") " pod="openshift-infra/auto-csr-approver-29555048-5nxr7" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.520405 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" Mar 12 08:08:00 crc kubenswrapper[4809]: I0312 08:08:00.774344 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555048-5nxr7"] Mar 12 08:08:00 crc kubenswrapper[4809]: W0312 08:08:00.784953 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d51c3ce_6d91_4798_9fed_d45c18fad38d.slice/crio-26975093a095e47c0af2371b0a418560d47c93402ac059f5ccd04c4661053419 WatchSource:0}: Error finding container 26975093a095e47c0af2371b0a418560d47c93402ac059f5ccd04c4661053419: Status 404 returned error can't find the container with id 26975093a095e47c0af2371b0a418560d47c93402ac059f5ccd04c4661053419 Mar 12 08:08:01 crc kubenswrapper[4809]: I0312 08:08:01.595770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" event={"ID":"3d51c3ce-6d91-4798-9fed-d45c18fad38d","Type":"ContainerStarted","Data":"26975093a095e47c0af2371b0a418560d47c93402ac059f5ccd04c4661053419"} Mar 12 08:08:02 crc kubenswrapper[4809]: I0312 08:08:02.608456 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" event={"ID":"3d51c3ce-6d91-4798-9fed-d45c18fad38d","Type":"ContainerStarted","Data":"2c948b97be361afc93d6dd63a1bf112c8ed27c0075ff33bc83cde3bbe8ddf6d4"} Mar 12 08:08:02 crc kubenswrapper[4809]: I0312 08:08:02.643209 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" podStartSLOduration=1.359447649 podStartE2EDuration="2.64318424s" podCreationTimestamp="2026-03-12 08:08:00 +0000 UTC" firstStartedPulling="2026-03-12 08:08:00.788237664 +0000 UTC m=+554.370273397" lastFinishedPulling="2026-03-12 08:08:02.071974255 +0000 UTC m=+555.654009988" observedRunningTime="2026-03-12 08:08:02.638940724 +0000 UTC m=+556.220976457" 
watchObservedRunningTime="2026-03-12 08:08:02.64318424 +0000 UTC m=+556.225219973" Mar 12 08:08:03 crc kubenswrapper[4809]: I0312 08:08:03.630613 4809 generic.go:334] "Generic (PLEG): container finished" podID="3d51c3ce-6d91-4798-9fed-d45c18fad38d" containerID="2c948b97be361afc93d6dd63a1bf112c8ed27c0075ff33bc83cde3bbe8ddf6d4" exitCode=0 Mar 12 08:08:03 crc kubenswrapper[4809]: I0312 08:08:03.630704 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" event={"ID":"3d51c3ce-6d91-4798-9fed-d45c18fad38d","Type":"ContainerDied","Data":"2c948b97be361afc93d6dd63a1bf112c8ed27c0075ff33bc83cde3bbe8ddf6d4"} Mar 12 08:08:04 crc kubenswrapper[4809]: I0312 08:08:04.913037 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" Mar 12 08:08:05 crc kubenswrapper[4809]: I0312 08:08:05.091660 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shvn7\" (UniqueName: \"kubernetes.io/projected/3d51c3ce-6d91-4798-9fed-d45c18fad38d-kube-api-access-shvn7\") pod \"3d51c3ce-6d91-4798-9fed-d45c18fad38d\" (UID: \"3d51c3ce-6d91-4798-9fed-d45c18fad38d\") " Mar 12 08:08:05 crc kubenswrapper[4809]: I0312 08:08:05.099548 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51c3ce-6d91-4798-9fed-d45c18fad38d-kube-api-access-shvn7" (OuterVolumeSpecName: "kube-api-access-shvn7") pod "3d51c3ce-6d91-4798-9fed-d45c18fad38d" (UID: "3d51c3ce-6d91-4798-9fed-d45c18fad38d"). InnerVolumeSpecName "kube-api-access-shvn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:08:05 crc kubenswrapper[4809]: I0312 08:08:05.194853 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shvn7\" (UniqueName: \"kubernetes.io/projected/3d51c3ce-6d91-4798-9fed-d45c18fad38d-kube-api-access-shvn7\") on node \"crc\" DevicePath \"\"" Mar 12 08:08:05 crc kubenswrapper[4809]: I0312 08:08:05.645710 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" Mar 12 08:08:05 crc kubenswrapper[4809]: I0312 08:08:05.645680 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555048-5nxr7" event={"ID":"3d51c3ce-6d91-4798-9fed-d45c18fad38d","Type":"ContainerDied","Data":"26975093a095e47c0af2371b0a418560d47c93402ac059f5ccd04c4661053419"} Mar 12 08:08:05 crc kubenswrapper[4809]: I0312 08:08:05.646021 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26975093a095e47c0af2371b0a418560d47c93402ac059f5ccd04c4661053419" Mar 12 08:08:05 crc kubenswrapper[4809]: I0312 08:08:05.694474 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555042-287fz"] Mar 12 08:08:05 crc kubenswrapper[4809]: I0312 08:08:05.698017 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555042-287fz"] Mar 12 08:08:06 crc kubenswrapper[4809]: I0312 08:08:06.532698 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rvfv4" podUID="6bd51111-825a-4679-95fa-6dfe33ff138c" containerName="console" containerID="cri-o://25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e" gracePeriod=15 Mar 12 08:08:06 crc kubenswrapper[4809]: I0312 08:08:06.980300 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-rvfv4_6bd51111-825a-4679-95fa-6dfe33ff138c/console/0.log" Mar 12 08:08:06 crc kubenswrapper[4809]: I0312 08:08:06.980404 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.112475 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2d7bf2-3fcc-42c4-ae05-c16d5c714a26" path="/var/lib/kubelet/pods/da2d7bf2-3fcc-42c4-ae05-c16d5c714a26/volumes" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.127061 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-oauth-serving-cert\") pod \"6bd51111-825a-4679-95fa-6dfe33ff138c\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.127129 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-console-config\") pod \"6bd51111-825a-4679-95fa-6dfe33ff138c\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.127188 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-service-ca\") pod \"6bd51111-825a-4679-95fa-6dfe33ff138c\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.127265 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-trusted-ca-bundle\") pod \"6bd51111-825a-4679-95fa-6dfe33ff138c\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " Mar 12 08:08:07 crc 
kubenswrapper[4809]: I0312 08:08:07.127284 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8h92\" (UniqueName: \"kubernetes.io/projected/6bd51111-825a-4679-95fa-6dfe33ff138c-kube-api-access-d8h92\") pod \"6bd51111-825a-4679-95fa-6dfe33ff138c\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.127347 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-serving-cert\") pod \"6bd51111-825a-4679-95fa-6dfe33ff138c\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.127391 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-oauth-config\") pod \"6bd51111-825a-4679-95fa-6dfe33ff138c\" (UID: \"6bd51111-825a-4679-95fa-6dfe33ff138c\") " Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.127887 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6bd51111-825a-4679-95fa-6dfe33ff138c" (UID: "6bd51111-825a-4679-95fa-6dfe33ff138c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.127895 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-console-config" (OuterVolumeSpecName: "console-config") pod "6bd51111-825a-4679-95fa-6dfe33ff138c" (UID: "6bd51111-825a-4679-95fa-6dfe33ff138c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.128341 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6bd51111-825a-4679-95fa-6dfe33ff138c" (UID: "6bd51111-825a-4679-95fa-6dfe33ff138c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.128377 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-service-ca" (OuterVolumeSpecName: "service-ca") pod "6bd51111-825a-4679-95fa-6dfe33ff138c" (UID: "6bd51111-825a-4679-95fa-6dfe33ff138c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.135229 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6bd51111-825a-4679-95fa-6dfe33ff138c" (UID: "6bd51111-825a-4679-95fa-6dfe33ff138c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.135957 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6bd51111-825a-4679-95fa-6dfe33ff138c" (UID: "6bd51111-825a-4679-95fa-6dfe33ff138c"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.136191 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd51111-825a-4679-95fa-6dfe33ff138c-kube-api-access-d8h92" (OuterVolumeSpecName: "kube-api-access-d8h92") pod "6bd51111-825a-4679-95fa-6dfe33ff138c" (UID: "6bd51111-825a-4679-95fa-6dfe33ff138c"). InnerVolumeSpecName "kube-api-access-d8h92". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.231875 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.231963 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.232020 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.232044 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8h92\" (UniqueName: \"kubernetes.io/projected/6bd51111-825a-4679-95fa-6dfe33ff138c-kube-api-access-d8h92\") on node \"crc\" DevicePath \"\"" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.232092 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd51111-825a-4679-95fa-6dfe33ff138c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.232197 4809 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.232219 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6bd51111-825a-4679-95fa-6dfe33ff138c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.660907 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rvfv4_6bd51111-825a-4679-95fa-6dfe33ff138c/console/0.log" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.660972 4809 generic.go:334] "Generic (PLEG): container finished" podID="6bd51111-825a-4679-95fa-6dfe33ff138c" containerID="25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e" exitCode=2 Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.661015 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rvfv4" event={"ID":"6bd51111-825a-4679-95fa-6dfe33ff138c","Type":"ContainerDied","Data":"25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e"} Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.661049 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rvfv4" event={"ID":"6bd51111-825a-4679-95fa-6dfe33ff138c","Type":"ContainerDied","Data":"f862b73f694a27ee8ed01f5a755f71528097afd1a3b257606a763320d603c909"} Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.661067 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rvfv4" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.661074 4809 scope.go:117] "RemoveContainer" containerID="25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.678916 4809 scope.go:117] "RemoveContainer" containerID="25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e" Mar 12 08:08:07 crc kubenswrapper[4809]: E0312 08:08:07.679671 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e\": container with ID starting with 25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e not found: ID does not exist" containerID="25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.679724 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e"} err="failed to get container status \"25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e\": rpc error: code = NotFound desc = could not find container \"25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e\": container with ID starting with 25b88d807d99e685a191b84ed90f4a984c34acaa2da7fcd646b5e0e75f4b9a7e not found: ID does not exist" Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.697788 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rvfv4"] Mar 12 08:08:07 crc kubenswrapper[4809]: I0312 08:08:07.701361 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rvfv4"] Mar 12 08:08:09 crc kubenswrapper[4809]: I0312 08:08:09.117062 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd51111-825a-4679-95fa-6dfe33ff138c" 
path="/var/lib/kubelet/pods/6bd51111-825a-4679-95fa-6dfe33ff138c/volumes" Mar 12 08:08:11 crc kubenswrapper[4809]: I0312 08:08:11.212500 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:08:11 crc kubenswrapper[4809]: I0312 08:08:11.217081 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" Mar 12 08:08:15 crc kubenswrapper[4809]: I0312 08:08:15.048759 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:08:15 crc kubenswrapper[4809]: I0312 08:08:15.049500 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:08:32 crc kubenswrapper[4809]: I0312 08:08:32.171078 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:08:32 crc kubenswrapper[4809]: I0312 08:08:32.212906 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:08:32 crc kubenswrapper[4809]: I0312 08:08:32.911008 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 08:08:36 crc kubenswrapper[4809]: I0312 08:08:36.466319 4809 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3d51c3ce-6d91-4798-9fed-d45c18fad38d"] err="unable to destroy 
cgroup paths for cgroup [kubepods besteffort pod3d51c3ce-6d91-4798-9fed-d45c18fad38d] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3d51c3ce_6d91_4798_9fed_d45c18fad38d.slice" Mar 12 08:08:45 crc kubenswrapper[4809]: I0312 08:08:45.048372 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:08:45 crc kubenswrapper[4809]: I0312 08:08:45.048661 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:09:15 crc kubenswrapper[4809]: I0312 08:09:15.048837 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:09:15 crc kubenswrapper[4809]: I0312 08:09:15.049920 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:09:15 crc kubenswrapper[4809]: I0312 08:09:15.049996 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:09:15 crc kubenswrapper[4809]: I0312 08:09:15.051105 4809 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97cdf03cc29e20d0fb57a7ec80495b91279dda92db249385bd14589bccc4f68c"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:09:15 crc kubenswrapper[4809]: I0312 08:09:15.051232 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://97cdf03cc29e20d0fb57a7ec80495b91279dda92db249385bd14589bccc4f68c" gracePeriod=600 Mar 12 08:09:15 crc kubenswrapper[4809]: I0312 08:09:15.186311 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="97cdf03cc29e20d0fb57a7ec80495b91279dda92db249385bd14589bccc4f68c" exitCode=0 Mar 12 08:09:15 crc kubenswrapper[4809]: I0312 08:09:15.186360 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"97cdf03cc29e20d0fb57a7ec80495b91279dda92db249385bd14589bccc4f68c"} Mar 12 08:09:15 crc kubenswrapper[4809]: I0312 08:09:15.186398 4809 scope.go:117] "RemoveContainer" containerID="4e9e4f9c5d2c28b5cf22cec3c1066c042f4246b7416ca727e9771e6b84eecf1f" Mar 12 08:09:16 crc kubenswrapper[4809]: I0312 08:09:16.197788 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"ca8598ebfe987f1558a37c5bf134fcec2279989f25be8a694a644461aa780ee9"} Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.827058 4809 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-86779b9dcc-qsh2c"] Mar 12 08:09:25 crc kubenswrapper[4809]: E0312 08:09:25.828997 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d51c3ce-6d91-4798-9fed-d45c18fad38d" containerName="oc" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.829034 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d51c3ce-6d91-4798-9fed-d45c18fad38d" containerName="oc" Mar 12 08:09:25 crc kubenswrapper[4809]: E0312 08:09:25.829065 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd51111-825a-4679-95fa-6dfe33ff138c" containerName="console" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.829082 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd51111-825a-4679-95fa-6dfe33ff138c" containerName="console" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.829457 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d51c3ce-6d91-4798-9fed-d45c18fad38d" containerName="oc" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.829497 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd51111-825a-4679-95fa-6dfe33ff138c" containerName="console" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.830800 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.852936 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86779b9dcc-qsh2c"] Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.961457 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-oauth-config\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.961515 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-oauth-serving-cert\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.961537 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-config\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.961570 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-serving-cert\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.961928 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-trusted-ca-bundle\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.962101 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd2rx\" (UniqueName: \"kubernetes.io/projected/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-kube-api-access-zd2rx\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:25 crc kubenswrapper[4809]: I0312 08:09:25.962226 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-service-ca\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.064436 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-oauth-config\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.064533 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-oauth-serving-cert\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.064576 
4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-config\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.064630 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-serving-cert\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.064707 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-trusted-ca-bundle\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.064763 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd2rx\" (UniqueName: \"kubernetes.io/projected/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-kube-api-access-zd2rx\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.064810 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-service-ca\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.065552 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-oauth-serving-cert\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.066250 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-config\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.066869 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-service-ca\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.067169 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-trusted-ca-bundle\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.077456 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-oauth-config\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.081477 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-serving-cert\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.109878 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd2rx\" (UniqueName: \"kubernetes.io/projected/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-kube-api-access-zd2rx\") pod \"console-86779b9dcc-qsh2c\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.178000 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:26 crc kubenswrapper[4809]: I0312 08:09:26.699586 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86779b9dcc-qsh2c"] Mar 12 08:09:26 crc kubenswrapper[4809]: W0312 08:09:26.711913 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice/crio-21b91a806d7df23458eb6f3f3b68456ace9b8c28a896e20bfb5d5337a1b76555 WatchSource:0}: Error finding container 21b91a806d7df23458eb6f3f3b68456ace9b8c28a896e20bfb5d5337a1b76555: Status 404 returned error can't find the container with id 21b91a806d7df23458eb6f3f3b68456ace9b8c28a896e20bfb5d5337a1b76555 Mar 12 08:09:27 crc kubenswrapper[4809]: I0312 08:09:27.299587 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86779b9dcc-qsh2c" event={"ID":"c4d10085-7b35-4cc0-ae7a-b9c3f443e326","Type":"ContainerStarted","Data":"8fb7cb25a6f08e2581183caaee3ad2a30d71a135aebabbe23188555a771c696c"} Mar 12 08:09:27 crc kubenswrapper[4809]: I0312 08:09:27.300187 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86779b9dcc-qsh2c" 
event={"ID":"c4d10085-7b35-4cc0-ae7a-b9c3f443e326","Type":"ContainerStarted","Data":"21b91a806d7df23458eb6f3f3b68456ace9b8c28a896e20bfb5d5337a1b76555"} Mar 12 08:09:27 crc kubenswrapper[4809]: I0312 08:09:27.322469 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-86779b9dcc-qsh2c" podStartSLOduration=2.322442282 podStartE2EDuration="2.322442282s" podCreationTimestamp="2026-03-12 08:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:09:27.321666192 +0000 UTC m=+640.903701945" watchObservedRunningTime="2026-03-12 08:09:27.322442282 +0000 UTC m=+640.904478015" Mar 12 08:09:36 crc kubenswrapper[4809]: I0312 08:09:36.178561 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:36 crc kubenswrapper[4809]: I0312 08:09:36.179895 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:36 crc kubenswrapper[4809]: I0312 08:09:36.187893 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:36 crc kubenswrapper[4809]: I0312 08:09:36.389679 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:09:36 crc kubenswrapper[4809]: I0312 08:09:36.465162 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5f4d7b984b-8kzpt"] Mar 12 08:09:58 crc kubenswrapper[4809]: I0312 08:09:58.404476 4809 scope.go:117] "RemoveContainer" containerID="62e5cc06c620a7dcb17248e279555452486e3ebe420de46cfb780e6705bf96bd" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.162947 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555050-vrdkj"] Mar 12 08:10:00 
crc kubenswrapper[4809]: I0312 08:10:00.164479 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555050-vrdkj" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.167705 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.168252 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.168867 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.175823 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555050-vrdkj"] Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.299716 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2h8r\" (UniqueName: \"kubernetes.io/projected/3652d677-b5e0-425e-9805-aa4e6acc9437-kube-api-access-z2h8r\") pod \"auto-csr-approver-29555050-vrdkj\" (UID: \"3652d677-b5e0-425e-9805-aa4e6acc9437\") " pod="openshift-infra/auto-csr-approver-29555050-vrdkj" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.402809 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2h8r\" (UniqueName: \"kubernetes.io/projected/3652d677-b5e0-425e-9805-aa4e6acc9437-kube-api-access-z2h8r\") pod \"auto-csr-approver-29555050-vrdkj\" (UID: \"3652d677-b5e0-425e-9805-aa4e6acc9437\") " pod="openshift-infra/auto-csr-approver-29555050-vrdkj" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.434975 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2h8r\" (UniqueName: \"kubernetes.io/projected/3652d677-b5e0-425e-9805-aa4e6acc9437-kube-api-access-z2h8r\") 
pod \"auto-csr-approver-29555050-vrdkj\" (UID: \"3652d677-b5e0-425e-9805-aa4e6acc9437\") " pod="openshift-infra/auto-csr-approver-29555050-vrdkj" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.489678 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555050-vrdkj" Mar 12 08:10:00 crc kubenswrapper[4809]: I0312 08:10:00.744207 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555050-vrdkj"] Mar 12 08:10:01 crc kubenswrapper[4809]: I0312 08:10:01.525570 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5f4d7b984b-8kzpt" podUID="c785b38a-f6f9-4152-a343-814d2b15a3a1" containerName="console" containerID="cri-o://5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b" gracePeriod=15 Mar 12 08:10:01 crc kubenswrapper[4809]: I0312 08:10:01.590972 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555050-vrdkj" event={"ID":"3652d677-b5e0-425e-9805-aa4e6acc9437","Type":"ContainerStarted","Data":"8c2918c86ccb7208d9d64d188791634edaa4a561b8a4d1e3983c8f889877c3da"} Mar 12 08:10:01 crc kubenswrapper[4809]: I0312 08:10:01.901320 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5f4d7b984b-8kzpt_c785b38a-f6f9-4152-a343-814d2b15a3a1/console/0.log" Mar 12 08:10:01 crc kubenswrapper[4809]: I0312 08:10:01.901700 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.032009 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-trusted-ca-bundle\") pod \"c785b38a-f6f9-4152-a343-814d2b15a3a1\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.032131 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-service-ca\") pod \"c785b38a-f6f9-4152-a343-814d2b15a3a1\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.032214 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-serving-cert\") pod \"c785b38a-f6f9-4152-a343-814d2b15a3a1\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.032242 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-488pr\" (UniqueName: \"kubernetes.io/projected/c785b38a-f6f9-4152-a343-814d2b15a3a1-kube-api-access-488pr\") pod \"c785b38a-f6f9-4152-a343-814d2b15a3a1\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.032273 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-oauth-config\") pod \"c785b38a-f6f9-4152-a343-814d2b15a3a1\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.032313 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-config\") pod \"c785b38a-f6f9-4152-a343-814d2b15a3a1\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.032361 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-oauth-serving-cert\") pod \"c785b38a-f6f9-4152-a343-814d2b15a3a1\" (UID: \"c785b38a-f6f9-4152-a343-814d2b15a3a1\") " Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.033267 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c785b38a-f6f9-4152-a343-814d2b15a3a1" (UID: "c785b38a-f6f9-4152-a343-814d2b15a3a1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.033400 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c785b38a-f6f9-4152-a343-814d2b15a3a1" (UID: "c785b38a-f6f9-4152-a343-814d2b15a3a1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.033421 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-config" (OuterVolumeSpecName: "console-config") pod "c785b38a-f6f9-4152-a343-814d2b15a3a1" (UID: "c785b38a-f6f9-4152-a343-814d2b15a3a1"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.033736 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-service-ca" (OuterVolumeSpecName: "service-ca") pod "c785b38a-f6f9-4152-a343-814d2b15a3a1" (UID: "c785b38a-f6f9-4152-a343-814d2b15a3a1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.037793 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c785b38a-f6f9-4152-a343-814d2b15a3a1" (UID: "c785b38a-f6f9-4152-a343-814d2b15a3a1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.038908 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c785b38a-f6f9-4152-a343-814d2b15a3a1-kube-api-access-488pr" (OuterVolumeSpecName: "kube-api-access-488pr") pod "c785b38a-f6f9-4152-a343-814d2b15a3a1" (UID: "c785b38a-f6f9-4152-a343-814d2b15a3a1"). InnerVolumeSpecName "kube-api-access-488pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.046000 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c785b38a-f6f9-4152-a343-814d2b15a3a1" (UID: "c785b38a-f6f9-4152-a343-814d2b15a3a1"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.134435 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.134469 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-488pr\" (UniqueName: \"kubernetes.io/projected/c785b38a-f6f9-4152-a343-814d2b15a3a1-kube-api-access-488pr\") on node \"crc\" DevicePath \"\"" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.134483 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.134492 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.134501 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.134512 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.134530 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c785b38a-f6f9-4152-a343-814d2b15a3a1-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:10:02 crc 
kubenswrapper[4809]: I0312 08:10:02.601440 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5f4d7b984b-8kzpt_c785b38a-f6f9-4152-a343-814d2b15a3a1/console/0.log" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.601763 4809 generic.go:334] "Generic (PLEG): container finished" podID="c785b38a-f6f9-4152-a343-814d2b15a3a1" containerID="5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b" exitCode=2 Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.601841 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f4d7b984b-8kzpt" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.601840 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f4d7b984b-8kzpt" event={"ID":"c785b38a-f6f9-4152-a343-814d2b15a3a1","Type":"ContainerDied","Data":"5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b"} Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.601956 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f4d7b984b-8kzpt" event={"ID":"c785b38a-f6f9-4152-a343-814d2b15a3a1","Type":"ContainerDied","Data":"ee3bbd4eba71ed922c624f39175e9bf63e4011ccea598dbeb809101148c6cec1"} Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.601976 4809 scope.go:117] "RemoveContainer" containerID="5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.606595 4809 generic.go:334] "Generic (PLEG): container finished" podID="3652d677-b5e0-425e-9805-aa4e6acc9437" containerID="f65f18fe505c98f1153e0d27e39102c27241b5a85047688d9a1aa9184c0cf4c6" exitCode=0 Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.606705 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555050-vrdkj" 
event={"ID":"3652d677-b5e0-425e-9805-aa4e6acc9437","Type":"ContainerDied","Data":"f65f18fe505c98f1153e0d27e39102c27241b5a85047688d9a1aa9184c0cf4c6"} Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.639283 4809 scope.go:117] "RemoveContainer" containerID="5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b" Mar 12 08:10:02 crc kubenswrapper[4809]: E0312 08:10:02.640095 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b\": container with ID starting with 5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b not found: ID does not exist" containerID="5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.640141 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b"} err="failed to get container status \"5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b\": rpc error: code = NotFound desc = could not find container \"5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b\": container with ID starting with 5feaa0ed952de3eae2cad7603feb4201efebedfd7009a7940e50004f14fd4a1b not found: ID does not exist" Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.643479 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5f4d7b984b-8kzpt"] Mar 12 08:10:02 crc kubenswrapper[4809]: I0312 08:10:02.646867 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5f4d7b984b-8kzpt"] Mar 12 08:10:03 crc kubenswrapper[4809]: I0312 08:10:03.114303 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c785b38a-f6f9-4152-a343-814d2b15a3a1" path="/var/lib/kubelet/pods/c785b38a-f6f9-4152-a343-814d2b15a3a1/volumes" Mar 12 08:10:03 crc 
kubenswrapper[4809]: I0312 08:10:03.967090 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555050-vrdkj" Mar 12 08:10:04 crc kubenswrapper[4809]: I0312 08:10:04.065547 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2h8r\" (UniqueName: \"kubernetes.io/projected/3652d677-b5e0-425e-9805-aa4e6acc9437-kube-api-access-z2h8r\") pod \"3652d677-b5e0-425e-9805-aa4e6acc9437\" (UID: \"3652d677-b5e0-425e-9805-aa4e6acc9437\") " Mar 12 08:10:04 crc kubenswrapper[4809]: I0312 08:10:04.076060 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3652d677-b5e0-425e-9805-aa4e6acc9437-kube-api-access-z2h8r" (OuterVolumeSpecName: "kube-api-access-z2h8r") pod "3652d677-b5e0-425e-9805-aa4e6acc9437" (UID: "3652d677-b5e0-425e-9805-aa4e6acc9437"). InnerVolumeSpecName "kube-api-access-z2h8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:10:04 crc kubenswrapper[4809]: I0312 08:10:04.168714 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2h8r\" (UniqueName: \"kubernetes.io/projected/3652d677-b5e0-425e-9805-aa4e6acc9437-kube-api-access-z2h8r\") on node \"crc\" DevicePath \"\"" Mar 12 08:10:04 crc kubenswrapper[4809]: I0312 08:10:04.626399 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555050-vrdkj" event={"ID":"3652d677-b5e0-425e-9805-aa4e6acc9437","Type":"ContainerDied","Data":"8c2918c86ccb7208d9d64d188791634edaa4a561b8a4d1e3983c8f889877c3da"} Mar 12 08:10:04 crc kubenswrapper[4809]: I0312 08:10:04.626465 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c2918c86ccb7208d9d64d188791634edaa4a561b8a4d1e3983c8f889877c3da" Mar 12 08:10:04 crc kubenswrapper[4809]: I0312 08:10:04.626539 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555050-vrdkj" Mar 12 08:10:05 crc kubenswrapper[4809]: I0312 08:10:05.053405 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555044-7j2cn"] Mar 12 08:10:05 crc kubenswrapper[4809]: I0312 08:10:05.059349 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555044-7j2cn"] Mar 12 08:10:05 crc kubenswrapper[4809]: I0312 08:10:05.117599 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9274685c-c53e-4796-bb96-e1f50db591ed" path="/var/lib/kubelet/pods/9274685c-c53e-4796-bb96-e1f50db591ed/volumes" Mar 12 08:10:58 crc kubenswrapper[4809]: I0312 08:10:58.476087 4809 scope.go:117] "RemoveContainer" containerID="8c47932dc5b93742eb8a05948d3b248a5b749a68e6094eed23ee2c185d8f7cd8" Mar 12 08:11:15 crc kubenswrapper[4809]: I0312 08:11:15.048447 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:11:15 crc kubenswrapper[4809]: I0312 08:11:15.049254 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:11:27 crc kubenswrapper[4809]: I0312 08:11:27.967436 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r"] Mar 12 08:11:27 crc kubenswrapper[4809]: E0312 08:11:27.968178 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c785b38a-f6f9-4152-a343-814d2b15a3a1" 
containerName="console" Mar 12 08:11:27 crc kubenswrapper[4809]: I0312 08:11:27.968192 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c785b38a-f6f9-4152-a343-814d2b15a3a1" containerName="console" Mar 12 08:11:27 crc kubenswrapper[4809]: E0312 08:11:27.968401 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3652d677-b5e0-425e-9805-aa4e6acc9437" containerName="oc" Mar 12 08:11:27 crc kubenswrapper[4809]: I0312 08:11:27.968409 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3652d677-b5e0-425e-9805-aa4e6acc9437" containerName="oc" Mar 12 08:11:27 crc kubenswrapper[4809]: I0312 08:11:27.968518 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c785b38a-f6f9-4152-a343-814d2b15a3a1" containerName="console" Mar 12 08:11:27 crc kubenswrapper[4809]: I0312 08:11:27.968532 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3652d677-b5e0-425e-9805-aa4e6acc9437" containerName="oc" Mar 12 08:11:27 crc kubenswrapper[4809]: I0312 08:11:27.969542 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:27 crc kubenswrapper[4809]: I0312 08:11:27.972069 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.034617 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r"] Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.125520 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.125575 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdwsj\" (UniqueName: \"kubernetes.io/projected/56b4ec18-bdd0-4122-ab71-3ff920814d18-kube-api-access-cdwsj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.125674 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: 
I0312 08:11:28.226778 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.227275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.227301 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdwsj\" (UniqueName: \"kubernetes.io/projected/56b4ec18-bdd0-4122-ab71-3ff920814d18-kube-api-access-cdwsj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.227309 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.227640 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.253262 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdwsj\" (UniqueName: \"kubernetes.io/projected/56b4ec18-bdd0-4122-ab71-3ff920814d18-kube-api-access-cdwsj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.295330 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:28 crc kubenswrapper[4809]: I0312 08:11:28.513889 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r"] Mar 12 08:11:29 crc kubenswrapper[4809]: I0312 08:11:29.306642 4809 generic.go:334] "Generic (PLEG): container finished" podID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerID="a4071b196491c8c57718d18784dc3ca24153b4375576fc20196b179915815dd2" exitCode=0 Mar 12 08:11:29 crc kubenswrapper[4809]: I0312 08:11:29.306746 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" event={"ID":"56b4ec18-bdd0-4122-ab71-3ff920814d18","Type":"ContainerDied","Data":"a4071b196491c8c57718d18784dc3ca24153b4375576fc20196b179915815dd2"} Mar 12 08:11:29 crc kubenswrapper[4809]: I0312 08:11:29.306803 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" event={"ID":"56b4ec18-bdd0-4122-ab71-3ff920814d18","Type":"ContainerStarted","Data":"b5f33d89c91aa4cfb25810446fad48af0ad140d36101f162fc9fc6c30c225813"} Mar 12 08:11:31 crc kubenswrapper[4809]: I0312 08:11:31.318876 4809 generic.go:334] "Generic (PLEG): container finished" podID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerID="3fc6d42efff750cf73fe8adf38ab185b58e148c1012499399107a27cc0183509" exitCode=0 Mar 12 08:11:31 crc kubenswrapper[4809]: I0312 08:11:31.318936 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" event={"ID":"56b4ec18-bdd0-4122-ab71-3ff920814d18","Type":"ContainerDied","Data":"3fc6d42efff750cf73fe8adf38ab185b58e148c1012499399107a27cc0183509"} Mar 12 08:11:32 crc kubenswrapper[4809]: I0312 08:11:32.328977 4809 generic.go:334] "Generic (PLEG): container finished" podID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerID="722fd751d662fd7e933767631ec7e6cf11ace46c809e4339b243db2acaca1740" exitCode=0 Mar 12 08:11:32 crc kubenswrapper[4809]: I0312 08:11:32.329040 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" event={"ID":"56b4ec18-bdd0-4122-ab71-3ff920814d18","Type":"ContainerDied","Data":"722fd751d662fd7e933767631ec7e6cf11ace46c809e4339b243db2acaca1740"} Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.625546 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.708241 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-bundle\") pod \"56b4ec18-bdd0-4122-ab71-3ff920814d18\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.708351 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-util\") pod \"56b4ec18-bdd0-4122-ab71-3ff920814d18\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.708385 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdwsj\" (UniqueName: \"kubernetes.io/projected/56b4ec18-bdd0-4122-ab71-3ff920814d18-kube-api-access-cdwsj\") pod \"56b4ec18-bdd0-4122-ab71-3ff920814d18\" (UID: \"56b4ec18-bdd0-4122-ab71-3ff920814d18\") " Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.711152 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-bundle" (OuterVolumeSpecName: "bundle") pod "56b4ec18-bdd0-4122-ab71-3ff920814d18" (UID: "56b4ec18-bdd0-4122-ab71-3ff920814d18"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.715315 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b4ec18-bdd0-4122-ab71-3ff920814d18-kube-api-access-cdwsj" (OuterVolumeSpecName: "kube-api-access-cdwsj") pod "56b4ec18-bdd0-4122-ab71-3ff920814d18" (UID: "56b4ec18-bdd0-4122-ab71-3ff920814d18"). InnerVolumeSpecName "kube-api-access-cdwsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.721994 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-util" (OuterVolumeSpecName: "util") pod "56b4ec18-bdd0-4122-ab71-3ff920814d18" (UID: "56b4ec18-bdd0-4122-ab71-3ff920814d18"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.809562 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-util\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.809601 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdwsj\" (UniqueName: \"kubernetes.io/projected/56b4ec18-bdd0-4122-ab71-3ff920814d18-kube-api-access-cdwsj\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:33 crc kubenswrapper[4809]: I0312 08:11:33.809612 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56b4ec18-bdd0-4122-ab71-3ff920814d18-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:34 crc kubenswrapper[4809]: I0312 08:11:34.364394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" event={"ID":"56b4ec18-bdd0-4122-ab71-3ff920814d18","Type":"ContainerDied","Data":"b5f33d89c91aa4cfb25810446fad48af0ad140d36101f162fc9fc6c30c225813"} Mar 12 08:11:34 crc kubenswrapper[4809]: I0312 08:11:34.364819 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f33d89c91aa4cfb25810446fad48af0ad140d36101f162fc9fc6c30c225813" Mar 12 08:11:34 crc kubenswrapper[4809]: I0312 08:11:34.364551 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r" Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.048339 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7h9l6"] Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.049000 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovn-controller" containerID="cri-o://11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a" gracePeriod=30 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.049140 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="northd" containerID="cri-o://95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390" gracePeriod=30 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.049166 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="sbdb" containerID="cri-o://3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85" gracePeriod=30 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.049286 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="nbdb" containerID="cri-o://833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39" gracePeriod=30 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.049283 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kube-rbac-proxy-node" 
containerID="cri-o://31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c" gracePeriod=30 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.049350 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovn-acl-logging" containerID="cri-o://943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047" gracePeriod=30 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.049431 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2" gracePeriod=30 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.114342 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" containerID="cri-o://7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9" gracePeriod=30 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.394779 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovnkube-controller/3.log" Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.408752 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovn-acl-logging/0.log" Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.412667 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovn-controller/0.log" Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414456 4809 generic.go:334] 
"Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9" exitCode=0 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414516 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39" exitCode=0 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414526 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390" exitCode=0 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414532 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047" exitCode=143 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414544 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a" exitCode=143 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414521 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9"} Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414694 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39"} Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414727 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390"} Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414739 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047"} Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414750 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a"} Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.414750 4809 scope.go:117] "RemoveContainer" containerID="b1d9f4aacaba0b546f2a616ca1c4e27fabeb14a3a1800c92409bdf7bea6f167d" Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.423881 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/2.log" Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.427787 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/1.log" Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.427842 4809 generic.go:334] "Generic (PLEG): container finished" podID="85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff" containerID="4a23740d494fbb78dad140e4ef9ec81eea5ab2a2bec25923af1731fb54b4beed" exitCode=2 Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.427874 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xgl7" event={"ID":"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff","Type":"ContainerDied","Data":"4a23740d494fbb78dad140e4ef9ec81eea5ab2a2bec25923af1731fb54b4beed"} 
Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.428488 4809 scope.go:117] "RemoveContainer" containerID="4a23740d494fbb78dad140e4ef9ec81eea5ab2a2bec25923af1731fb54b4beed" Mar 12 08:11:39 crc kubenswrapper[4809]: E0312 08:11:39.428798 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4xgl7_openshift-multus(85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff)\"" pod="openshift-multus/multus-4xgl7" podUID="85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff" Mar 12 08:11:39 crc kubenswrapper[4809]: I0312 08:11:39.457123 4809 scope.go:117] "RemoveContainer" containerID="a8d25ff9aef142a7444e4e864d929858a9e9a4d6831ed4e012b288e96ad920d6" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.243359 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovn-acl-logging/0.log" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.243861 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovn-controller/0.log" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.244343 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.404571 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-openvswitch\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.404712 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.404899 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-ovn-kubernetes\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.404991 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2wz9\" (UniqueName: \"kubernetes.io/projected/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-kube-api-access-r2wz9\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405028 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-script-lib\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc 
kubenswrapper[4809]: I0312 08:11:40.405048 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-kubelet\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405069 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovn-node-metrics-cert\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405083 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-slash\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405100 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405139 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-bin\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405166 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-netd\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405185 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-log-socket\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405203 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-var-lib-openvswitch\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405221 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-ovn\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405221 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405240 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-netns\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405291 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405283 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405308 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-etc-openvswitch\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405315 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). 
InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405332 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-log-socket" (OuterVolumeSpecName: "log-socket") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405345 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-slash" (OuterVolumeSpecName: "host-slash") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405351 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405355 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-config\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405367 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405392 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-env-overrides\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405393 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405412 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-node-log\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405414 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405428 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-systemd\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405444 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-systemd-units\") pod \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\" (UID: \"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e\") " Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405521 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405550 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-node-log" (OuterVolumeSpecName: "node-log") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405621 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405644 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405719 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405749 4809 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-node-log\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405762 4809 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405774 4809 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405784 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405793 4809 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405803 4809 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-slash\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405812 4809 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc 
kubenswrapper[4809]: I0312 08:11:40.405821 4809 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405830 4809 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405838 4809 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-log-socket\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405847 4809 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405855 4809 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405863 4809 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405871 4809 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.405879 4809 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.406676 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.415225 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.418327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-kube-api-access-r2wz9" (OuterVolumeSpecName: "kube-api-access-r2wz9") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "kube-api-access-r2wz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.433770 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lszkd"] Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434007 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="nbdb" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434020 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="nbdb" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434030 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerName="pull" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434036 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerName="pull" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434047 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovn-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434053 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovn-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434062 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="northd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434079 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="northd" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434089 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerName="extract" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 
08:11:40.434094 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerName="extract" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434100 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434107 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434133 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerName="util" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434139 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerName="util" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434150 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kube-rbac-proxy-node" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434155 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kube-rbac-proxy-node" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434164 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434172 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434179 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="sbdb" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434185 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="sbdb" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434196 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434202 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434208 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovn-acl-logging" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434215 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovn-acl-logging" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434223 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434228 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434238 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434244 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434255 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kubecfg-setup" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 
08:11:40.434261 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kubecfg-setup" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.434271 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434276 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434373 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434383 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434389 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434396 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="sbdb" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434404 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="nbdb" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434410 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovn-acl-logging" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434417 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="northd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 
08:11:40.434423 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kube-rbac-proxy-node" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434429 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovn-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434437 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b4ec18-bdd0-4122-ab71-3ff920814d18" containerName="extract" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434446 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434642 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.434843 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerName="ovnkube-controller" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.436510 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.438207 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" (UID: "cc7631d0-7d4b-4f5a-ab01-7516b2ed998e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.441703 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovn-acl-logging/0.log" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.442235 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7h9l6_cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/ovn-controller/0.log" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.442782 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85" exitCode=0 Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.442865 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2" exitCode=0 Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.442918 4809 generic.go:334] "Generic (PLEG): container finished" podID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" containerID="31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c" exitCode=0 Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.443033 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85"} Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.443134 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2"} Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.443218 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c"} Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.443312 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" event={"ID":"cc7631d0-7d4b-4f5a-ab01-7516b2ed998e","Type":"ContainerDied","Data":"664fc95bdb25511c48003bee104a460ddacbd720d5808bd8ddcfbd0a8a9d6c60"} Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.443398 4809 scope.go:117] "RemoveContainer" containerID="7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.443647 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7h9l6" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.449260 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/2.log" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.474788 4809 scope.go:117] "RemoveContainer" containerID="3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.493631 4809 scope.go:117] "RemoveContainer" containerID="833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508425 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508477 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508501 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-cni-netd\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508530 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovnkube-script-lib\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508561 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovnkube-config\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508577 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-node-log\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 
08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508594 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-systemd\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-kubelet\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508627 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovn-node-metrics-cert\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508642 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-etc-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508663 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-env-overrides\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508688 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpwm\" (UniqueName: \"kubernetes.io/projected/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-kube-api-access-xcpwm\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508711 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-var-lib-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-slash\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508766 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508785 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-ovn\") pod \"ovnkube-node-lszkd\" (UID: 
\"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508802 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-log-socket\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508822 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-cni-bin\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508836 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-systemd-units\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508851 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-run-netns\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508887 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc 
kubenswrapper[4809]: I0312 08:11:40.508901 4809 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508910 4809 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508919 4809 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.508928 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2wz9\" (UniqueName: \"kubernetes.io/projected/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e-kube-api-access-r2wz9\") on node \"crc\" DevicePath \"\"" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.514952 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7h9l6"] Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.521287 4809 scope.go:117] "RemoveContainer" containerID="95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.533155 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7h9l6"] Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.540260 4809 scope.go:117] "RemoveContainer" containerID="06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.571469 4809 scope.go:117] "RemoveContainer" containerID="31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 
08:11:40.591203 4809 scope.go:117] "RemoveContainer" containerID="943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610303 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610351 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610377 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-cni-netd\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610413 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovnkube-script-lib\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610441 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovnkube-config\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-systemd\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-node-log\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610501 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-kubelet\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610517 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-etc-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610532 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovn-node-metrics-cert\") pod 
\"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610553 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-env-overrides\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610573 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcpwm\" (UniqueName: \"kubernetes.io/projected/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-kube-api-access-xcpwm\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610610 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-var-lib-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610626 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-slash\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610650 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: 
\"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610665 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-ovn\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610680 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-log-socket\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610702 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-cni-bin\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610718 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-run-netns\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610732 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-systemd-units\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 
08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610803 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-systemd-units\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.610885 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-cni-netd\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.611450 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-ovn\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.611617 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-var-lib-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.611734 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-slash\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.611847 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.611968 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-kubelet\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.611989 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovnkube-config\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.612011 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-etc-openvswitch\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.612230 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-cni-bin\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.612260 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-run-systemd\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.612255 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-log-socket\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.612287 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-node-log\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.612311 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-host-run-netns\") pod \"ovnkube-node-lszkd\" (UID: 
\"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.612770 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-env-overrides\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.612868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovnkube-script-lib\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.615330 4809 scope.go:117] "RemoveContainer" containerID="11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.616195 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-ovn-node-metrics-cert\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.649914 4809 scope.go:117] "RemoveContainer" containerID="dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.679349 4809 scope.go:117] "RemoveContainer" containerID="7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.683898 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9\": container with ID starting with 7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9 not found: ID does not exist" containerID="7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.683944 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9"} err="failed to get container status \"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9\": rpc error: code = NotFound desc = could not find container \"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9\": container with ID starting with 7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.683970 4809 scope.go:117] "RemoveContainer" containerID="3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.686743 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\": container with ID starting with 3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85 not found: ID does not exist" containerID="3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.686773 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85"} err="failed to get container status \"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\": rpc error: code = NotFound desc = could not find container \"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\": container with ID 
starting with 3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.686811 4809 scope.go:117] "RemoveContainer" containerID="833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.686843 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcpwm\" (UniqueName: \"kubernetes.io/projected/c49fc18e-a2b6-448b-9e0e-f97942ee77ae-kube-api-access-xcpwm\") pod \"ovnkube-node-lszkd\" (UID: \"c49fc18e-a2b6-448b-9e0e-f97942ee77ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.692072 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\": container with ID starting with 833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39 not found: ID does not exist" containerID="833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.692164 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39"} err="failed to get container status \"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\": rpc error: code = NotFound desc = could not find container \"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\": container with ID starting with 833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.692192 4809 scope.go:117] "RemoveContainer" containerID="95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.694394 4809 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\": container with ID starting with 95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390 not found: ID does not exist" containerID="95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.694434 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390"} err="failed to get container status \"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\": rpc error: code = NotFound desc = could not find container \"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\": container with ID starting with 95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.694457 4809 scope.go:117] "RemoveContainer" containerID="06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.695966 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\": container with ID starting with 06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2 not found: ID does not exist" containerID="06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.695994 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2"} err="failed to get container status \"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\": rpc error: code = NotFound desc = could 
not find container \"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\": container with ID starting with 06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.696011 4809 scope.go:117] "RemoveContainer" containerID="31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.697504 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\": container with ID starting with 31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c not found: ID does not exist" containerID="31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.697523 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c"} err="failed to get container status \"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\": rpc error: code = NotFound desc = could not find container \"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\": container with ID starting with 31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.697538 4809 scope.go:117] "RemoveContainer" containerID="943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.701573 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\": container with ID starting with 943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047 not found: 
ID does not exist" containerID="943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.701615 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047"} err="failed to get container status \"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\": rpc error: code = NotFound desc = could not find container \"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\": container with ID starting with 943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.701642 4809 scope.go:117] "RemoveContainer" containerID="11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.704665 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\": container with ID starting with 11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a not found: ID does not exist" containerID="11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.704719 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a"} err="failed to get container status \"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\": rpc error: code = NotFound desc = could not find container \"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\": container with ID starting with 11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.704755 4809 
scope.go:117] "RemoveContainer" containerID="dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828" Mar 12 08:11:40 crc kubenswrapper[4809]: E0312 08:11:40.706621 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\": container with ID starting with dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828 not found: ID does not exist" containerID="dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.706652 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828"} err="failed to get container status \"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\": rpc error: code = NotFound desc = could not find container \"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\": container with ID starting with dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.706709 4809 scope.go:117] "RemoveContainer" containerID="7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.712173 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9"} err="failed to get container status \"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9\": rpc error: code = NotFound desc = could not find container \"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9\": container with ID starting with 7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 
08:11:40.712243 4809 scope.go:117] "RemoveContainer" containerID="3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.717304 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85"} err="failed to get container status \"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\": rpc error: code = NotFound desc = could not find container \"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\": container with ID starting with 3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.717367 4809 scope.go:117] "RemoveContainer" containerID="833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.738671 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39"} err="failed to get container status \"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\": rpc error: code = NotFound desc = could not find container \"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\": container with ID starting with 833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.738766 4809 scope.go:117] "RemoveContainer" containerID="95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.739744 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390"} err="failed to get container status 
\"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\": rpc error: code = NotFound desc = could not find container \"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\": container with ID starting with 95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.739788 4809 scope.go:117] "RemoveContainer" containerID="06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.740344 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2"} err="failed to get container status \"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\": rpc error: code = NotFound desc = could not find container \"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\": container with ID starting with 06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.740489 4809 scope.go:117] "RemoveContainer" containerID="31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.743018 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c"} err="failed to get container status \"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\": rpc error: code = NotFound desc = could not find container \"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\": container with ID starting with 31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.743073 4809 scope.go:117] "RemoveContainer" 
containerID="943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.754564 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047"} err="failed to get container status \"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\": rpc error: code = NotFound desc = could not find container \"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\": container with ID starting with 943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.754606 4809 scope.go:117] "RemoveContainer" containerID="11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.757363 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a"} err="failed to get container status \"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\": rpc error: code = NotFound desc = could not find container \"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\": container with ID starting with 11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.757431 4809 scope.go:117] "RemoveContainer" containerID="dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.770388 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828"} err="failed to get container status \"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\": rpc error: code = NotFound desc = could 
not find container \"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\": container with ID starting with dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.770436 4809 scope.go:117] "RemoveContainer" containerID="7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.774207 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9"} err="failed to get container status \"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9\": rpc error: code = NotFound desc = could not find container \"7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9\": container with ID starting with 7c512f2d07de26de4898c7699488bb0fc61be75e0ad81b970d46e2cd73b3cae9 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.774252 4809 scope.go:117] "RemoveContainer" containerID="3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.776446 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85"} err="failed to get container status \"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\": rpc error: code = NotFound desc = could not find container \"3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85\": container with ID starting with 3a39013e253a1014bfc666169571d96ed943c28cc73221fcfa3f85d31ee9ea85 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.776473 4809 scope.go:117] "RemoveContainer" containerID="833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 
08:11:40.777933 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39"} err="failed to get container status \"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\": rpc error: code = NotFound desc = could not find container \"833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39\": container with ID starting with 833b7c9d59ec9b7cf52e9b6c12705d034b46279a46145c181f92d30ffb6e7f39 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.777960 4809 scope.go:117] "RemoveContainer" containerID="95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.779098 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390"} err="failed to get container status \"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\": rpc error: code = NotFound desc = could not find container \"95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390\": container with ID starting with 95b49430238b4b9dc6fefc7169830e7106e6d43b1130d0b2fd358f103161f390 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.779151 4809 scope.go:117] "RemoveContainer" containerID="06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.782728 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.784893 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2"} err="failed to get container status \"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\": rpc error: code = NotFound desc = could not find container \"06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2\": container with ID starting with 06c9bb77ec46b7ddf658e865660c808ab82509e698598c42daf2b199305c32c2 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.784923 4809 scope.go:117] "RemoveContainer" containerID="31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.785207 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c"} err="failed to get container status \"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\": rpc error: code = NotFound desc = could not find container \"31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c\": container with ID starting with 31a4d3f57db51abc6b567283017a96e97ab98055de09513fe14d0be447942b4c not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.785224 4809 scope.go:117] "RemoveContainer" containerID="943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.785438 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047"} err="failed to get container status \"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\": rpc error: code = NotFound desc = could not 
find container \"943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047\": container with ID starting with 943bab511f40b661cb31e977ec90fbc4edd6cb41d26593b0d7829dfd854fc047 not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.785452 4809 scope.go:117] "RemoveContainer" containerID="11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.785639 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a"} err="failed to get container status \"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\": rpc error: code = NotFound desc = could not find container \"11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a\": container with ID starting with 11f06a94e052ce6d8bc2496ade124907ca536c37d2710afe11c824327dc3de5a not found: ID does not exist" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.785654 4809 scope.go:117] "RemoveContainer" containerID="dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828" Mar 12 08:11:40 crc kubenswrapper[4809]: I0312 08:11:40.785833 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828"} err="failed to get container status \"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\": rpc error: code = NotFound desc = could not find container \"dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828\": container with ID starting with dba8662650bf1433ed0fa43f8119364c9b03119d2ba62985c825e779669b6828 not found: ID does not exist" Mar 12 08:11:41 crc kubenswrapper[4809]: I0312 08:11:41.113092 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7631d0-7d4b-4f5a-ab01-7516b2ed998e" 
path="/var/lib/kubelet/pods/cc7631d0-7d4b-4f5a-ab01-7516b2ed998e/volumes" Mar 12 08:11:41 crc kubenswrapper[4809]: I0312 08:11:41.465546 4809 generic.go:334] "Generic (PLEG): container finished" podID="c49fc18e-a2b6-448b-9e0e-f97942ee77ae" containerID="0b999bd94965df363a021c9f0d4ad5dfd233b4b40478ec3f93f654df7782c77d" exitCode=0 Mar 12 08:11:41 crc kubenswrapper[4809]: I0312 08:11:41.465614 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerDied","Data":"0b999bd94965df363a021c9f0d4ad5dfd233b4b40478ec3f93f654df7782c77d"} Mar 12 08:11:41 crc kubenswrapper[4809]: I0312 08:11:41.465871 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"c7848169f83c487c70abd0620c9fce012a023b075c8e6dc360853b27a02df75f"} Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.293171 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d"] Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.294646 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.297070 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.301832 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.307853 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-hs5bl" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.344001 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6"] Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.345373 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.348206 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-fj54r" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.349199 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.364868 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn"] Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.366401 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.441226 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef4ced13-c901-407a-aa7b-ed5198a4cca8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn\" (UID: \"ef4ced13-c901-407a-aa7b-ed5198a4cca8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.441267 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps65x\" (UniqueName: \"kubernetes.io/projected/a6312c6e-68f1-40c5-82ea-50fda65c492f-kube-api-access-ps65x\") pod \"obo-prometheus-operator-68bc856cb9-fzj9d\" (UID: \"a6312c6e-68f1-40c5-82ea-50fda65c492f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.441308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4977a4c-9481-45c8-ba76-6e985c4e11be-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6\" (UID: \"d4977a4c-9481-45c8-ba76-6e985c4e11be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.441447 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4977a4c-9481-45c8-ba76-6e985c4e11be-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6\" (UID: \"d4977a4c-9481-45c8-ba76-6e985c4e11be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 
08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.441502 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef4ced13-c901-407a-aa7b-ed5198a4cca8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn\" (UID: \"ef4ced13-c901-407a-aa7b-ed5198a4cca8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.487903 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"5fb5119b201aedd27cd92b2bb4e614c4d181874cca176fdf01388f228a267527"} Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.489465 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"81a8a19dee672fc65d1ff9db2d29e9ad5789ef3bd675e1ac9335b2184facbd48"} Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.489573 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"91d011fc9a83415dc922c29db3880f0eee22e5ffe6a11530bb19fb9c931d81ac"} Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.489638 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"d84186d9fe051a59ed0402d8baee0b397d42c2b1a698c1bb473bc9244ce6603d"} Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.489707 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" 
event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"6e0bcdb30f1784a47d6376d55d46b0daeeff0658551f6e385fff630791102780"} Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.489761 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"a1c6e57968d80c7ffb32ecc5cc7de270fa29f759ee932b7f6095a7ee5f60b5d5"} Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.518918 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wwrvc"] Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.519709 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.522599 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-kmcth" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.524569 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.542987 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4977a4c-9481-45c8-ba76-6e985c4e11be-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6\" (UID: \"d4977a4c-9481-45c8-ba76-6e985c4e11be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.543037 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef4ced13-c901-407a-aa7b-ed5198a4cca8-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn\" (UID: \"ef4ced13-c901-407a-aa7b-ed5198a4cca8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.543175 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef4ced13-c901-407a-aa7b-ed5198a4cca8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn\" (UID: \"ef4ced13-c901-407a-aa7b-ed5198a4cca8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.543203 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps65x\" (UniqueName: \"kubernetes.io/projected/a6312c6e-68f1-40c5-82ea-50fda65c492f-kube-api-access-ps65x\") pod \"obo-prometheus-operator-68bc856cb9-fzj9d\" (UID: \"a6312c6e-68f1-40c5-82ea-50fda65c492f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.543251 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4977a4c-9481-45c8-ba76-6e985c4e11be-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6\" (UID: \"d4977a4c-9481-45c8-ba76-6e985c4e11be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.562924 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef4ced13-c901-407a-aa7b-ed5198a4cca8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn\" (UID: \"ef4ced13-c901-407a-aa7b-ed5198a4cca8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 
12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.562969 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4977a4c-9481-45c8-ba76-6e985c4e11be-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6\" (UID: \"d4977a4c-9481-45c8-ba76-6e985c4e11be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.562924 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4977a4c-9481-45c8-ba76-6e985c4e11be-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6\" (UID: \"d4977a4c-9481-45c8-ba76-6e985c4e11be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.562986 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef4ced13-c901-407a-aa7b-ed5198a4cca8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn\" (UID: \"ef4ced13-c901-407a-aa7b-ed5198a4cca8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.585604 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps65x\" (UniqueName: \"kubernetes.io/projected/a6312c6e-68f1-40c5-82ea-50fda65c492f-kube-api-access-ps65x\") pod \"obo-prometheus-operator-68bc856cb9-fzj9d\" (UID: \"a6312c6e-68f1-40c5-82ea-50fda65c492f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.611457 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.642139 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(2e07d96cd019fe8c8441e624841a64ea575dd9630e324b6ee6100a1cb706a7db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.642220 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(2e07d96cd019fe8c8441e624841a64ea575dd9630e324b6ee6100a1cb706a7db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.642246 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(2e07d96cd019fe8c8441e624841a64ea575dd9630e324b6ee6100a1cb706a7db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.642292 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators(a6312c6e-68f1-40c5-82ea-50fda65c492f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators(a6312c6e-68f1-40c5-82ea-50fda65c492f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(2e07d96cd019fe8c8441e624841a64ea575dd9630e324b6ee6100a1cb706a7db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" podUID="a6312c6e-68f1-40c5-82ea-50fda65c492f" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.644996 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f1a2dab-e883-409f-ba21-a52ea0947c1b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wwrvc\" (UID: \"8f1a2dab-e883-409f-ba21-a52ea0947c1b\") " pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.645157 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5z5p\" (UniqueName: \"kubernetes.io/projected/8f1a2dab-e883-409f-ba21-a52ea0947c1b-kube-api-access-t5z5p\") pod \"observability-operator-59bdc8b94-wwrvc\" (UID: \"8f1a2dab-e883-409f-ba21-a52ea0947c1b\") " pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.659690 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.679551 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.694942 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(3688ff0460d4bf728ee639e60eb1f19d627dbaa5ceb8e4fb0ab732fc1aee14cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.695006 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(3688ff0460d4bf728ee639e60eb1f19d627dbaa5ceb8e4fb0ab732fc1aee14cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.695034 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(3688ff0460d4bf728ee639e60eb1f19d627dbaa5ceb8e4fb0ab732fc1aee14cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.695083 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators(d4977a4c-9481-45c8-ba76-6e985c4e11be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators(d4977a4c-9481-45c8-ba76-6e985c4e11be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(3688ff0460d4bf728ee639e60eb1f19d627dbaa5ceb8e4fb0ab732fc1aee14cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" podUID="d4977a4c-9481-45c8-ba76-6e985c4e11be" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.697546 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gb2mk"] Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.698409 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.700923 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-j4pww" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.719787 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(c27182cc07b70c09acd54e837a30e8920277b6fc129259849f081464cbc0254e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.719862 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(c27182cc07b70c09acd54e837a30e8920277b6fc129259849f081464cbc0254e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.719895 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(c27182cc07b70c09acd54e837a30e8920277b6fc129259849f081464cbc0254e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.719957 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators(ef4ced13-c901-407a-aa7b-ed5198a4cca8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators(ef4ced13-c901-407a-aa7b-ed5198a4cca8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(c27182cc07b70c09acd54e837a30e8920277b6fc129259849f081464cbc0254e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" podUID="ef4ced13-c901-407a-aa7b-ed5198a4cca8" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.746763 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f1a2dab-e883-409f-ba21-a52ea0947c1b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wwrvc\" (UID: \"8f1a2dab-e883-409f-ba21-a52ea0947c1b\") " pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.746903 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5z5p\" (UniqueName: \"kubernetes.io/projected/8f1a2dab-e883-409f-ba21-a52ea0947c1b-kube-api-access-t5z5p\") pod \"observability-operator-59bdc8b94-wwrvc\" (UID: \"8f1a2dab-e883-409f-ba21-a52ea0947c1b\") " pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.751830 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f1a2dab-e883-409f-ba21-a52ea0947c1b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wwrvc\" (UID: \"8f1a2dab-e883-409f-ba21-a52ea0947c1b\") " pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.773200 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5z5p\" (UniqueName: \"kubernetes.io/projected/8f1a2dab-e883-409f-ba21-a52ea0947c1b-kube-api-access-t5z5p\") pod \"observability-operator-59bdc8b94-wwrvc\" (UID: \"8f1a2dab-e883-409f-ba21-a52ea0947c1b\") " pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.834334 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.848489 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/68cdd0d8-8927-4777-8067-995b7a404794-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gb2mk\" (UID: \"68cdd0d8-8927-4777-8067-995b7a404794\") " pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.848562 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9xq6\" (UniqueName: \"kubernetes.io/projected/68cdd0d8-8927-4777-8067-995b7a404794-kube-api-access-j9xq6\") pod \"perses-operator-5bf474d74f-gb2mk\" (UID: \"68cdd0d8-8927-4777-8067-995b7a404794\") " pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.863505 4809 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(bf57b0110752e6bf90949a5512fc0a849dd8d2e51c9376b1363133f49e600978): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.863584 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(bf57b0110752e6bf90949a5512fc0a849dd8d2e51c9376b1363133f49e600978): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.863614 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(bf57b0110752e6bf90949a5512fc0a849dd8d2e51c9376b1363133f49e600978): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:42 crc kubenswrapper[4809]: E0312 08:11:42.863684 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-wwrvc_openshift-operators(8f1a2dab-e883-409f-ba21-a52ea0947c1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-wwrvc_openshift-operators(8f1a2dab-e883-409f-ba21-a52ea0947c1b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(bf57b0110752e6bf90949a5512fc0a849dd8d2e51c9376b1363133f49e600978): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" podUID="8f1a2dab-e883-409f-ba21-a52ea0947c1b" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.950042 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9xq6\" (UniqueName: \"kubernetes.io/projected/68cdd0d8-8927-4777-8067-995b7a404794-kube-api-access-j9xq6\") pod \"perses-operator-5bf474d74f-gb2mk\" (UID: \"68cdd0d8-8927-4777-8067-995b7a404794\") " pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.950176 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/68cdd0d8-8927-4777-8067-995b7a404794-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gb2mk\" (UID: \"68cdd0d8-8927-4777-8067-995b7a404794\") " pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.951096 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/68cdd0d8-8927-4777-8067-995b7a404794-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gb2mk\" (UID: \"68cdd0d8-8927-4777-8067-995b7a404794\") " pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:42 crc kubenswrapper[4809]: I0312 08:11:42.972900 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9xq6\" (UniqueName: \"kubernetes.io/projected/68cdd0d8-8927-4777-8067-995b7a404794-kube-api-access-j9xq6\") pod \"perses-operator-5bf474d74f-gb2mk\" (UID: \"68cdd0d8-8927-4777-8067-995b7a404794\") " pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:43 crc kubenswrapper[4809]: I0312 08:11:43.013804 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:43 crc kubenswrapper[4809]: E0312 08:11:43.049768 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(84cefe084f84699c2f4b3b34d32f1b76a764ac5bf6e739c328485d5fd1d38148): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:43 crc kubenswrapper[4809]: E0312 08:11:43.049849 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(84cefe084f84699c2f4b3b34d32f1b76a764ac5bf6e739c328485d5fd1d38148): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:43 crc kubenswrapper[4809]: E0312 08:11:43.049897 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(84cefe084f84699c2f4b3b34d32f1b76a764ac5bf6e739c328485d5fd1d38148): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:43 crc kubenswrapper[4809]: E0312 08:11:43.049946 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-gb2mk_openshift-operators(68cdd0d8-8927-4777-8067-995b7a404794)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-gb2mk_openshift-operators(68cdd0d8-8927-4777-8067-995b7a404794)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(84cefe084f84699c2f4b3b34d32f1b76a764ac5bf6e739c328485d5fd1d38148): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" podUID="68cdd0d8-8927-4777-8067-995b7a404794" Mar 12 08:11:45 crc kubenswrapper[4809]: I0312 08:11:45.048694 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:11:45 crc kubenswrapper[4809]: I0312 08:11:45.049042 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:11:45 crc kubenswrapper[4809]: I0312 08:11:45.512477 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"2beeb86bf9a7d6dc3d026e7aeceff721093681a1b455af823356d31c8014d652"} Mar 12 08:11:47 crc kubenswrapper[4809]: I0312 08:11:47.527310 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" event={"ID":"c49fc18e-a2b6-448b-9e0e-f97942ee77ae","Type":"ContainerStarted","Data":"fb64340d825bb31da85dd18e922052b593dce6474ef4f57bf547a0cdd34155be"} Mar 12 08:11:47 crc kubenswrapper[4809]: I0312 08:11:47.527781 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:47 crc kubenswrapper[4809]: I0312 08:11:47.527793 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:47 crc kubenswrapper[4809]: I0312 08:11:47.527800 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:47 crc kubenswrapper[4809]: I0312 08:11:47.579772 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:47 crc kubenswrapper[4809]: I0312 08:11:47.584441 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" podStartSLOduration=7.584415516 podStartE2EDuration="7.584415516s" podCreationTimestamp="2026-03-12 08:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:11:47.572785376 +0000 UTC m=+781.154821119" watchObservedRunningTime="2026-03-12 08:11:47.584415516 +0000 UTC m=+781.166451269" Mar 12 08:11:47 crc kubenswrapper[4809]: I0312 08:11:47.585107 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.046286 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn"] Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.046430 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.046964 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.055188 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wwrvc"] Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.055342 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.056163 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.067909 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gb2mk"] Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.068136 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.068815 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.101143 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d"] Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.101280 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.101859 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.105335 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6"] Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.105417 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:48 crc kubenswrapper[4809]: I0312 08:11:48.106135 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.116684 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(436ab16005fc2887cdc382f3045f64153b3ee353921efd797d2edbb46070d55d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.117018 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(436ab16005fc2887cdc382f3045f64153b3ee353921efd797d2edbb46070d55d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.117050 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(436ab16005fc2887cdc382f3045f64153b3ee353921efd797d2edbb46070d55d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.117175 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators(ef4ced13-c901-407a-aa7b-ed5198a4cca8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators(ef4ced13-c901-407a-aa7b-ed5198a4cca8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(436ab16005fc2887cdc382f3045f64153b3ee353921efd797d2edbb46070d55d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" podUID="ef4ced13-c901-407a-aa7b-ed5198a4cca8" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.136201 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(e49a2d6f14cc47f3976fdc87e6c733fb267588374ab4afe4ac2fa556dba892d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.136279 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(e49a2d6f14cc47f3976fdc87e6c733fb267588374ab4afe4ac2fa556dba892d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.136307 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(e49a2d6f14cc47f3976fdc87e6c733fb267588374ab4afe4ac2fa556dba892d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.136356 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-wwrvc_openshift-operators(8f1a2dab-e883-409f-ba21-a52ea0947c1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-wwrvc_openshift-operators(8f1a2dab-e883-409f-ba21-a52ea0947c1b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(e49a2d6f14cc47f3976fdc87e6c733fb267588374ab4afe4ac2fa556dba892d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" podUID="8f1a2dab-e883-409f-ba21-a52ea0947c1b" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.142371 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(3461275d8d8fd377de6cc50bc47c553edf8bd94180eae976cdcb8e0601fcb93c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.142428 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(3461275d8d8fd377de6cc50bc47c553edf8bd94180eae976cdcb8e0601fcb93c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.142451 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(3461275d8d8fd377de6cc50bc47c553edf8bd94180eae976cdcb8e0601fcb93c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.142492 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-gb2mk_openshift-operators(68cdd0d8-8927-4777-8067-995b7a404794)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-gb2mk_openshift-operators(68cdd0d8-8927-4777-8067-995b7a404794)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(3461275d8d8fd377de6cc50bc47c553edf8bd94180eae976cdcb8e0601fcb93c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" podUID="68cdd0d8-8927-4777-8067-995b7a404794" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.178376 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(3c10b1b8a7f7904f9a70c5b768484a786ed600021dca1e4bcff989ab445f4104): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.178485 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(3c10b1b8a7f7904f9a70c5b768484a786ed600021dca1e4bcff989ab445f4104): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.178520 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(3c10b1b8a7f7904f9a70c5b768484a786ed600021dca1e4bcff989ab445f4104): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.178588 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators(d4977a4c-9481-45c8-ba76-6e985c4e11be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators(d4977a4c-9481-45c8-ba76-6e985c4e11be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(3c10b1b8a7f7904f9a70c5b768484a786ed600021dca1e4bcff989ab445f4104): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" podUID="d4977a4c-9481-45c8-ba76-6e985c4e11be" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.179461 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(fbe972945a32a56e765b790e4e7be1bc77467721ce47c51e1d336cf11d9b1435): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.179628 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(fbe972945a32a56e765b790e4e7be1bc77467721ce47c51e1d336cf11d9b1435): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.179723 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(fbe972945a32a56e765b790e4e7be1bc77467721ce47c51e1d336cf11d9b1435): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:11:48 crc kubenswrapper[4809]: E0312 08:11:48.179848 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators(a6312c6e-68f1-40c5-82ea-50fda65c492f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators(a6312c6e-68f1-40c5-82ea-50fda65c492f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(fbe972945a32a56e765b790e4e7be1bc77467721ce47c51e1d336cf11d9b1435): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" podUID="a6312c6e-68f1-40c5-82ea-50fda65c492f" Mar 12 08:11:51 crc kubenswrapper[4809]: I0312 08:11:51.106445 4809 scope.go:117] "RemoveContainer" containerID="4a23740d494fbb78dad140e4ef9ec81eea5ab2a2bec25923af1731fb54b4beed" Mar 12 08:11:51 crc kubenswrapper[4809]: E0312 08:11:51.107703 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4xgl7_openshift-multus(85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff)\"" pod="openshift-multus/multus-4xgl7" podUID="85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff" Mar 12 08:11:59 crc kubenswrapper[4809]: I0312 08:11:59.105441 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:59 crc kubenswrapper[4809]: I0312 08:11:59.105481 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:59 crc kubenswrapper[4809]: I0312 08:11:59.106831 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:59 crc kubenswrapper[4809]: I0312 08:11:59.107016 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:59 crc kubenswrapper[4809]: E0312 08:11:59.163516 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(e317d7daf55ca6d1cb6d11b457a1b3c2dfa0c19d4650828b1475b5659d651e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 12 08:11:59 crc kubenswrapper[4809]: E0312 08:11:59.163860 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(e317d7daf55ca6d1cb6d11b457a1b3c2dfa0c19d4650828b1475b5659d651e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:59 crc kubenswrapper[4809]: E0312 08:11:59.163884 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(e317d7daf55ca6d1cb6d11b457a1b3c2dfa0c19d4650828b1475b5659d651e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:11:59 crc kubenswrapper[4809]: E0312 08:11:59.163931 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-gb2mk_openshift-operators(68cdd0d8-8927-4777-8067-995b7a404794)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-gb2mk_openshift-operators(68cdd0d8-8927-4777-8067-995b7a404794)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-gb2mk_openshift-operators_68cdd0d8-8927-4777-8067-995b7a404794_0(e317d7daf55ca6d1cb6d11b457a1b3c2dfa0c19d4650828b1475b5659d651e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" podUID="68cdd0d8-8927-4777-8067-995b7a404794" Mar 12 08:11:59 crc kubenswrapper[4809]: E0312 08:11:59.180393 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(aefaa8808e233e974a20dcdbffc2a588166428a50871643edd48afda34b20b02): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:11:59 crc kubenswrapper[4809]: E0312 08:11:59.180465 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(aefaa8808e233e974a20dcdbffc2a588166428a50871643edd48afda34b20b02): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:59 crc kubenswrapper[4809]: E0312 08:11:59.181227 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(aefaa8808e233e974a20dcdbffc2a588166428a50871643edd48afda34b20b02): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:11:59 crc kubenswrapper[4809]: E0312 08:11:59.181297 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators(ef4ced13-c901-407a-aa7b-ed5198a4cca8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators(ef4ced13-c901-407a-aa7b-ed5198a4cca8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_openshift-operators_ef4ced13-c901-407a-aa7b-ed5198a4cca8_0(aefaa8808e233e974a20dcdbffc2a588166428a50871643edd48afda34b20b02): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" podUID="ef4ced13-c901-407a-aa7b-ed5198a4cca8" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.105224 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.105922 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.137625 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555052-th5qj"] Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.138604 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.138888 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(bd6e73de9756fece9fb09505f1e01a40574f1da8606bcb6e85c7c74b4248ba5f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.138946 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(bd6e73de9756fece9fb09505f1e01a40574f1da8606bcb6e85c7c74b4248ba5f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.138973 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(bd6e73de9756fece9fb09505f1e01a40574f1da8606bcb6e85c7c74b4248ba5f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.139022 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-wwrvc_openshift-operators(8f1a2dab-e883-409f-ba21-a52ea0947c1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-wwrvc_openshift-operators(8f1a2dab-e883-409f-ba21-a52ea0947c1b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-wwrvc_openshift-operators_8f1a2dab-e883-409f-ba21-a52ea0947c1b_0(bd6e73de9756fece9fb09505f1e01a40574f1da8606bcb6e85c7c74b4248ba5f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" podUID="8f1a2dab-e883-409f-ba21-a52ea0947c1b" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.148668 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.148851 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.148948 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.156447 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555052-th5qj"] Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.276951 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7cqj\" (UniqueName: \"kubernetes.io/projected/1979c745-e0ef-473f-b9df-b7444a9cfd62-kube-api-access-m7cqj\") pod \"auto-csr-approver-29555052-th5qj\" (UID: 
\"1979c745-e0ef-473f-b9df-b7444a9cfd62\") " pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.378416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7cqj\" (UniqueName: \"kubernetes.io/projected/1979c745-e0ef-473f-b9df-b7444a9cfd62-kube-api-access-m7cqj\") pod \"auto-csr-approver-29555052-th5qj\" (UID: \"1979c745-e0ef-473f-b9df-b7444a9cfd62\") " pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.397042 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7cqj\" (UniqueName: \"kubernetes.io/projected/1979c745-e0ef-473f-b9df-b7444a9cfd62-kube-api-access-m7cqj\") pod \"auto-csr-approver-29555052-th5qj\" (UID: \"1979c745-e0ef-473f-b9df-b7444a9cfd62\") " pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.498276 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.531474 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29555052-th5qj_openshift-infra_1979c745-e0ef-473f-b9df-b7444a9cfd62_0(3d726884e500a9c4d3bf2f2891745da53a9dc82d09dc85848730d945a1de45d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.531555 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29555052-th5qj_openshift-infra_1979c745-e0ef-473f-b9df-b7444a9cfd62_0(3d726884e500a9c4d3bf2f2891745da53a9dc82d09dc85848730d945a1de45d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.531583 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29555052-th5qj_openshift-infra_1979c745-e0ef-473f-b9df-b7444a9cfd62_0(3d726884e500a9c4d3bf2f2891745da53a9dc82d09dc85848730d945a1de45d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.531638 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29555052-th5qj_openshift-infra(1979c745-e0ef-473f-b9df-b7444a9cfd62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29555052-th5qj_openshift-infra(1979c745-e0ef-473f-b9df-b7444a9cfd62)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29555052-th5qj_openshift-infra_1979c745-e0ef-473f-b9df-b7444a9cfd62_0(3d726884e500a9c4d3bf2f2891745da53a9dc82d09dc85848730d945a1de45d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29555052-th5qj" podUID="1979c745-e0ef-473f-b9df-b7444a9cfd62" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.623250 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: I0312 08:12:00.623747 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.646620 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29555052-th5qj_openshift-infra_1979c745-e0ef-473f-b9df-b7444a9cfd62_0(3132623580ddfbc1393e224daf600686120390dc20e370ef5f0b6932c9b1e31f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.646686 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29555052-th5qj_openshift-infra_1979c745-e0ef-473f-b9df-b7444a9cfd62_0(3132623580ddfbc1393e224daf600686120390dc20e370ef5f0b6932c9b1e31f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.646706 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29555052-th5qj_openshift-infra_1979c745-e0ef-473f-b9df-b7444a9cfd62_0(3132623580ddfbc1393e224daf600686120390dc20e370ef5f0b6932c9b1e31f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:00 crc kubenswrapper[4809]: E0312 08:12:00.646751 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29555052-th5qj_openshift-infra(1979c745-e0ef-473f-b9df-b7444a9cfd62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29555052-th5qj_openshift-infra(1979c745-e0ef-473f-b9df-b7444a9cfd62)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29555052-th5qj_openshift-infra_1979c745-e0ef-473f-b9df-b7444a9cfd62_0(3132623580ddfbc1393e224daf600686120390dc20e370ef5f0b6932c9b1e31f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29555052-th5qj" podUID="1979c745-e0ef-473f-b9df-b7444a9cfd62" Mar 12 08:12:01 crc kubenswrapper[4809]: I0312 08:12:01.105764 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:12:01 crc kubenswrapper[4809]: I0312 08:12:01.106833 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:12:01 crc kubenswrapper[4809]: E0312 08:12:01.131358 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(6218a808be12462957b211fa23b8693c0471b5803c4e5f9616b95dcc98c80a72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 08:12:01 crc kubenswrapper[4809]: E0312 08:12:01.131505 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(6218a808be12462957b211fa23b8693c0471b5803c4e5f9616b95dcc98c80a72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:12:01 crc kubenswrapper[4809]: E0312 08:12:01.131595 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(6218a808be12462957b211fa23b8693c0471b5803c4e5f9616b95dcc98c80a72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:12:01 crc kubenswrapper[4809]: E0312 08:12:01.131695 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators(a6312c6e-68f1-40c5-82ea-50fda65c492f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators(a6312c6e-68f1-40c5-82ea-50fda65c492f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fzj9d_openshift-operators_a6312c6e-68f1-40c5-82ea-50fda65c492f_0(6218a808be12462957b211fa23b8693c0471b5803c4e5f9616b95dcc98c80a72): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" podUID="a6312c6e-68f1-40c5-82ea-50fda65c492f" Mar 12 08:12:02 crc kubenswrapper[4809]: I0312 08:12:02.106581 4809 scope.go:117] "RemoveContainer" containerID="4a23740d494fbb78dad140e4ef9ec81eea5ab2a2bec25923af1731fb54b4beed" Mar 12 08:12:02 crc kubenswrapper[4809]: I0312 08:12:02.733679 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xgl7_85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff/kube-multus/2.log" Mar 12 08:12:02 crc kubenswrapper[4809]: I0312 08:12:02.733760 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xgl7" event={"ID":"85879dda-3dbb-4ed2-a8e0-4b2dbcf175ff","Type":"ContainerStarted","Data":"3db7312c89323e4fbb6b3fe9bf9e7e7e920a8822bc20a93f91c5daa5783f1f50"} Mar 12 08:12:03 crc kubenswrapper[4809]: I0312 08:12:03.105220 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:12:03 crc kubenswrapper[4809]: I0312 08:12:03.106007 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:12:03 crc kubenswrapper[4809]: E0312 08:12:03.146645 4809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(b15ce19c0f20eccf936d70d8fab72a26fd3927de20431cb7859d691ddf75d36f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 08:12:03 crc kubenswrapper[4809]: E0312 08:12:03.146728 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(b15ce19c0f20eccf936d70d8fab72a26fd3927de20431cb7859d691ddf75d36f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:12:03 crc kubenswrapper[4809]: E0312 08:12:03.146757 4809 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(b15ce19c0f20eccf936d70d8fab72a26fd3927de20431cb7859d691ddf75d36f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:12:03 crc kubenswrapper[4809]: E0312 08:12:03.146829 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators(d4977a4c-9481-45c8-ba76-6e985c4e11be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators(d4977a4c-9481-45c8-ba76-6e985c4e11be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_openshift-operators_d4977a4c-9481-45c8-ba76-6e985c4e11be_0(b15ce19c0f20eccf936d70d8fab72a26fd3927de20431cb7859d691ddf75d36f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" podUID="d4977a4c-9481-45c8-ba76-6e985c4e11be" Mar 12 08:12:10 crc kubenswrapper[4809]: I0312 08:12:10.105712 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:12:10 crc kubenswrapper[4809]: I0312 08:12:10.107316 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" Mar 12 08:12:10 crc kubenswrapper[4809]: I0312 08:12:10.395356 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn"] Mar 12 08:12:10 crc kubenswrapper[4809]: I0312 08:12:10.806325 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" event={"ID":"ef4ced13-c901-407a-aa7b-ed5198a4cca8","Type":"ContainerStarted","Data":"03aeb84acb02342d294c4ee8f2f6dc9d6711adb78e975c3ca39e5d66ee253156"} Mar 12 08:12:10 crc kubenswrapper[4809]: I0312 08:12:10.813868 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lszkd" Mar 12 08:12:11 crc kubenswrapper[4809]: I0312 08:12:11.108554 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:12:11 crc kubenswrapper[4809]: I0312 08:12:11.109096 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:12:11 crc kubenswrapper[4809]: I0312 08:12:11.341922 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wwrvc"] Mar 12 08:12:11 crc kubenswrapper[4809]: I0312 08:12:11.815400 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" event={"ID":"8f1a2dab-e883-409f-ba21-a52ea0947c1b","Type":"ContainerStarted","Data":"3ea50538f74641d98a165daf14c4ec128f3d9ee487d7318123ffd3bbb92d6264"} Mar 12 08:12:12 crc kubenswrapper[4809]: I0312 08:12:12.105282 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:12:12 crc kubenswrapper[4809]: I0312 08:12:12.106543 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:12:13 crc kubenswrapper[4809]: I0312 08:12:13.109530 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:13 crc kubenswrapper[4809]: I0312 08:12:13.110581 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:13 crc kubenswrapper[4809]: I0312 08:12:13.961895 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555052-th5qj"] Mar 12 08:12:13 crc kubenswrapper[4809]: W0312 08:12:13.969822 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1979c745_e0ef_473f_b9df_b7444a9cfd62.slice/crio-a6423ae0907b7aaecfd63e073da9d2fccfe8c4d447b243f3fc3b626a8895779a WatchSource:0}: Error finding container a6423ae0907b7aaecfd63e073da9d2fccfe8c4d447b243f3fc3b626a8895779a: Status 404 returned error can't find the container with id a6423ae0907b7aaecfd63e073da9d2fccfe8c4d447b243f3fc3b626a8895779a Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.107299 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.108419 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.130831 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gb2mk"] Mar 12 08:12:14 crc kubenswrapper[4809]: W0312 08:12:14.137982 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68cdd0d8_8927_4777_8067_995b7a404794.slice/crio-48cf11cf30d363142f0488d03933e596c2da9b666a06333c2f176b1ea41476da WatchSource:0}: Error finding container 48cf11cf30d363142f0488d03933e596c2da9b666a06333c2f176b1ea41476da: Status 404 returned error can't find the container with id 48cf11cf30d363142f0488d03933e596c2da9b666a06333c2f176b1ea41476da Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.381278 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6"] Mar 12 08:12:14 crc kubenswrapper[4809]: W0312 08:12:14.392772 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4977a4c_9481_45c8_ba76_6e985c4e11be.slice/crio-5539729ed25f31c8f4d739b1bec7e4f6355ea26d9d0e62f250f199358e644d14 WatchSource:0}: Error finding container 5539729ed25f31c8f4d739b1bec7e4f6355ea26d9d0e62f250f199358e644d14: Status 404 returned error can't find the container with id 5539729ed25f31c8f4d739b1bec7e4f6355ea26d9d0e62f250f199358e644d14 Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.838963 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" event={"ID":"ef4ced13-c901-407a-aa7b-ed5198a4cca8","Type":"ContainerStarted","Data":"1b5a9350edcd114f122262312c74cdc35d089b881907353b1524302d4e6e5de0"} Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.840647 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" event={"ID":"68cdd0d8-8927-4777-8067-995b7a404794","Type":"ContainerStarted","Data":"48cf11cf30d363142f0488d03933e596c2da9b666a06333c2f176b1ea41476da"} Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.841950 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555052-th5qj" event={"ID":"1979c745-e0ef-473f-b9df-b7444a9cfd62","Type":"ContainerStarted","Data":"a6423ae0907b7aaecfd63e073da9d2fccfe8c4d447b243f3fc3b626a8895779a"} Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.843850 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" event={"ID":"d4977a4c-9481-45c8-ba76-6e985c4e11be","Type":"ContainerStarted","Data":"0c11b3ca17e4669a986fff9f5abecb152306fae4325b4c8bf9d91a8e2f8d781a"} Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.843873 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" event={"ID":"d4977a4c-9481-45c8-ba76-6e985c4e11be","Type":"ContainerStarted","Data":"5539729ed25f31c8f4d739b1bec7e4f6355ea26d9d0e62f250f199358e644d14"} Mar 12 08:12:14 crc kubenswrapper[4809]: I0312 08:12:14.862621 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-xkcxn" podStartSLOduration=29.564681409 podStartE2EDuration="32.862598116s" podCreationTimestamp="2026-03-12 08:11:42 +0000 UTC" firstStartedPulling="2026-03-12 08:12:10.417491192 +0000 UTC m=+803.999526925" lastFinishedPulling="2026-03-12 08:12:13.715407899 +0000 UTC m=+807.297443632" observedRunningTime="2026-03-12 08:12:14.856713153 +0000 UTC m=+808.438748876" watchObservedRunningTime="2026-03-12 08:12:14.862598116 +0000 UTC m=+808.444633849" Mar 12 08:12:14 crc 
kubenswrapper[4809]: I0312 08:12:14.887504 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56f5999595-jrnk6" podStartSLOduration=32.887466781 podStartE2EDuration="32.887466781s" podCreationTimestamp="2026-03-12 08:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:12:14.886662289 +0000 UTC m=+808.468698062" watchObservedRunningTime="2026-03-12 08:12:14.887466781 +0000 UTC m=+808.469502524" Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.057311 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.057379 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.057430 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.058125 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca8598ebfe987f1558a37c5bf134fcec2279989f25be8a694a644461aa780ee9"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:12:15 crc 
kubenswrapper[4809]: I0312 08:12:15.058180 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://ca8598ebfe987f1558a37c5bf134fcec2279989f25be8a694a644461aa780ee9" gracePeriod=600 Mar 12 08:12:15 crc kubenswrapper[4809]: E0312 08:12:15.608538 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1979c745_e0ef_473f_b9df_b7444a9cfd62.slice/crio-conmon-f846103c2aa84b4a246514941e6eefaa4f603e46dc61389f7662f4a4a2252af7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1979c745_e0ef_473f_b9df_b7444a9cfd62.slice/crio-f846103c2aa84b4a246514941e6eefaa4f603e46dc61389f7662f4a4a2252af7.scope\": RecentStats: unable to find data in memory cache]" Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.865100 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="ca8598ebfe987f1558a37c5bf134fcec2279989f25be8a694a644461aa780ee9" exitCode=0 Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.865219 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"ca8598ebfe987f1558a37c5bf134fcec2279989f25be8a694a644461aa780ee9"} Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.865274 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"5a05a1c82af92d83a47e94a370bbfc613a47227fe6fbddbdc5d572ba375fcf3f"} Mar 12 08:12:15 crc 
kubenswrapper[4809]: I0312 08:12:15.865298 4809 scope.go:117] "RemoveContainer" containerID="97cdf03cc29e20d0fb57a7ec80495b91279dda92db249385bd14589bccc4f68c" Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.868757 4809 generic.go:334] "Generic (PLEG): container finished" podID="1979c745-e0ef-473f-b9df-b7444a9cfd62" containerID="f846103c2aa84b4a246514941e6eefaa4f603e46dc61389f7662f4a4a2252af7" exitCode=0 Mar 12 08:12:15 crc kubenswrapper[4809]: I0312 08:12:15.868877 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555052-th5qj" event={"ID":"1979c745-e0ef-473f-b9df-b7444a9cfd62","Type":"ContainerDied","Data":"f846103c2aa84b4a246514941e6eefaa4f603e46dc61389f7662f4a4a2252af7"} Mar 12 08:12:16 crc kubenswrapper[4809]: I0312 08:12:16.105707 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:12:16 crc kubenswrapper[4809]: I0312 08:12:16.106470 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.326425 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.413238 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7cqj\" (UniqueName: \"kubernetes.io/projected/1979c745-e0ef-473f-b9df-b7444a9cfd62-kube-api-access-m7cqj\") pod \"1979c745-e0ef-473f-b9df-b7444a9cfd62\" (UID: \"1979c745-e0ef-473f-b9df-b7444a9cfd62\") " Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.419980 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1979c745-e0ef-473f-b9df-b7444a9cfd62-kube-api-access-m7cqj" (OuterVolumeSpecName: "kube-api-access-m7cqj") pod "1979c745-e0ef-473f-b9df-b7444a9cfd62" (UID: "1979c745-e0ef-473f-b9df-b7444a9cfd62"). InnerVolumeSpecName "kube-api-access-m7cqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.515916 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7cqj\" (UniqueName: \"kubernetes.io/projected/1979c745-e0ef-473f-b9df-b7444a9cfd62-kube-api-access-m7cqj\") on node \"crc\" DevicePath \"\"" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.775331 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d"] Mar 12 08:12:18 crc kubenswrapper[4809]: W0312 08:12:18.784350 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6312c6e_68f1_40c5_82ea_50fda65c492f.slice/crio-7da5c81e621af1c6e2fc1f034328e51d2b55b2f80357e94d02840c32ea53a38c WatchSource:0}: Error finding container 7da5c81e621af1c6e2fc1f034328e51d2b55b2f80357e94d02840c32ea53a38c: Status 404 returned error can't find the container with id 7da5c81e621af1c6e2fc1f034328e51d2b55b2f80357e94d02840c32ea53a38c Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.787503 4809 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.896600 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555052-th5qj" event={"ID":"1979c745-e0ef-473f-b9df-b7444a9cfd62","Type":"ContainerDied","Data":"a6423ae0907b7aaecfd63e073da9d2fccfe8c4d447b243f3fc3b626a8895779a"} Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.896647 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555052-th5qj" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.896975 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6423ae0907b7aaecfd63e073da9d2fccfe8c4d447b243f3fc3b626a8895779a" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.898333 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" event={"ID":"a6312c6e-68f1-40c5-82ea-50fda65c492f","Type":"ContainerStarted","Data":"7da5c81e621af1c6e2fc1f034328e51d2b55b2f80357e94d02840c32ea53a38c"} Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.900551 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" event={"ID":"8f1a2dab-e883-409f-ba21-a52ea0947c1b","Type":"ContainerStarted","Data":"b38529b3955e5f1bb2b8f48bff6dcc9085baffa4f70ff9a1a02a9599a23d8460"} Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.900872 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.902268 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" 
event={"ID":"68cdd0d8-8927-4777-8067-995b7a404794","Type":"ContainerStarted","Data":"f0172e8f51507662a5599bfd4ec1ccc841a295762d4d4556c71370939fa65dab"} Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.902834 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.903325 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" Mar 12 08:12:18 crc kubenswrapper[4809]: I0312 08:12:18.944847 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" podStartSLOduration=29.941576479 podStartE2EDuration="36.944822986s" podCreationTimestamp="2026-03-12 08:11:42 +0000 UTC" firstStartedPulling="2026-03-12 08:12:11.357181158 +0000 UTC m=+804.939216891" lastFinishedPulling="2026-03-12 08:12:18.360427665 +0000 UTC m=+811.942463398" observedRunningTime="2026-03-12 08:12:18.937750921 +0000 UTC m=+812.519786664" watchObservedRunningTime="2026-03-12 08:12:18.944822986 +0000 UTC m=+812.526858719" Mar 12 08:12:19 crc kubenswrapper[4809]: I0312 08:12:19.032689 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" podStartSLOduration=32.84424663 podStartE2EDuration="37.032660387s" podCreationTimestamp="2026-03-12 08:11:42 +0000 UTC" firstStartedPulling="2026-03-12 08:12:14.144970072 +0000 UTC m=+807.727005805" lastFinishedPulling="2026-03-12 08:12:18.333383829 +0000 UTC m=+811.915419562" observedRunningTime="2026-03-12 08:12:19.024256006 +0000 UTC m=+812.606291749" watchObservedRunningTime="2026-03-12 08:12:19.032660387 +0000 UTC m=+812.614696120" Mar 12 08:12:19 crc kubenswrapper[4809]: I0312 08:12:19.375287 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555046-2w55h"] 
Mar 12 08:12:19 crc kubenswrapper[4809]: I0312 08:12:19.379735 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555046-2w55h"] Mar 12 08:12:21 crc kubenswrapper[4809]: I0312 08:12:21.115707 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b2cf88-bbdd-48d9-8401-5bbd10f925ed" path="/var/lib/kubelet/pods/10b2cf88-bbdd-48d9-8401-5bbd10f925ed/volumes" Mar 12 08:12:21 crc kubenswrapper[4809]: I0312 08:12:21.928738 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" event={"ID":"a6312c6e-68f1-40c5-82ea-50fda65c492f","Type":"ContainerStarted","Data":"fced1b6dc93b3f12b428d642d480a559b9436315936abc7c369ad03abea59eaf"} Mar 12 08:12:23 crc kubenswrapper[4809]: I0312 08:12:23.016791 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" Mar 12 08:12:23 crc kubenswrapper[4809]: I0312 08:12:23.039463 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fzj9d" podStartSLOduration=38.552940429 podStartE2EDuration="41.039441308s" podCreationTimestamp="2026-03-12 08:11:42 +0000 UTC" firstStartedPulling="2026-03-12 08:12:18.787243562 +0000 UTC m=+812.369279295" lastFinishedPulling="2026-03-12 08:12:21.273744441 +0000 UTC m=+814.855780174" observedRunningTime="2026-03-12 08:12:21.954770846 +0000 UTC m=+815.536806569" watchObservedRunningTime="2026-03-12 08:12:23.039441308 +0000 UTC m=+816.621477051" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.397908 4809 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.485482 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jfznn"] Mar 12 08:12:26 crc 
kubenswrapper[4809]: E0312 08:12:26.485756 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1979c745-e0ef-473f-b9df-b7444a9cfd62" containerName="oc" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.485777 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1979c745-e0ef-473f-b9df-b7444a9cfd62" containerName="oc" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.485897 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1979c745-e0ef-473f-b9df-b7444a9cfd62" containerName="oc" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.486356 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jfznn" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.491942 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.492215 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.492348 4809 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qjg8l" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.517639 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jfznn"] Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.542365 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-947gd"] Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.543181 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-947gd" Mar 12 08:12:26 crc kubenswrapper[4809]: W0312 08:12:26.546025 4809 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-dhvms": failed to list *v1.Secret: secrets "cert-manager-dockercfg-dhvms" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Mar 12 08:12:26 crc kubenswrapper[4809]: E0312 08:12:26.546079 4809 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-dockercfg-dhvms\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-dockercfg-dhvms\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.567136 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-947gd"] Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.583383 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjkg\" (UniqueName: \"kubernetes.io/projected/a993902b-9d72-48a5-8acb-0dd1501e3445-kube-api-access-fxjkg\") pod \"cert-manager-cainjector-cf98fcc89-jfznn\" (UID: \"a993902b-9d72-48a5-8acb-0dd1501e3445\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jfznn" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.612282 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4jx86"] Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.613720 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.636352 4809 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xrwgv" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.677076 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4jx86"] Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.687066 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fcb\" (UniqueName: \"kubernetes.io/projected/5097b432-e4b9-407e-97a3-3821992f9f91-kube-api-access-g7fcb\") pod \"cert-manager-webhook-687f57d79b-4jx86\" (UID: \"5097b432-e4b9-407e-97a3-3821992f9f91\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.687135 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2hr7\" (UniqueName: \"kubernetes.io/projected/6bc14a5b-8cfa-4f00-917e-72248d3aadb5-kube-api-access-n2hr7\") pod \"cert-manager-858654f9db-947gd\" (UID: \"6bc14a5b-8cfa-4f00-917e-72248d3aadb5\") " pod="cert-manager/cert-manager-858654f9db-947gd" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.687632 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjkg\" (UniqueName: \"kubernetes.io/projected/a993902b-9d72-48a5-8acb-0dd1501e3445-kube-api-access-fxjkg\") pod \"cert-manager-cainjector-cf98fcc89-jfznn\" (UID: \"a993902b-9d72-48a5-8acb-0dd1501e3445\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jfznn" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.766560 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjkg\" (UniqueName: 
\"kubernetes.io/projected/a993902b-9d72-48a5-8acb-0dd1501e3445-kube-api-access-fxjkg\") pod \"cert-manager-cainjector-cf98fcc89-jfznn\" (UID: \"a993902b-9d72-48a5-8acb-0dd1501e3445\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jfznn" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.792200 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7fcb\" (UniqueName: \"kubernetes.io/projected/5097b432-e4b9-407e-97a3-3821992f9f91-kube-api-access-g7fcb\") pod \"cert-manager-webhook-687f57d79b-4jx86\" (UID: \"5097b432-e4b9-407e-97a3-3821992f9f91\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.792265 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2hr7\" (UniqueName: \"kubernetes.io/projected/6bc14a5b-8cfa-4f00-917e-72248d3aadb5-kube-api-access-n2hr7\") pod \"cert-manager-858654f9db-947gd\" (UID: \"6bc14a5b-8cfa-4f00-917e-72248d3aadb5\") " pod="cert-manager/cert-manager-858654f9db-947gd" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.815541 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jfznn" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.821855 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2hr7\" (UniqueName: \"kubernetes.io/projected/6bc14a5b-8cfa-4f00-917e-72248d3aadb5-kube-api-access-n2hr7\") pod \"cert-manager-858654f9db-947gd\" (UID: \"6bc14a5b-8cfa-4f00-917e-72248d3aadb5\") " pod="cert-manager/cert-manager-858654f9db-947gd" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.854193 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7fcb\" (UniqueName: \"kubernetes.io/projected/5097b432-e4b9-407e-97a3-3821992f9f91-kube-api-access-g7fcb\") pod \"cert-manager-webhook-687f57d79b-4jx86\" (UID: \"5097b432-e4b9-407e-97a3-3821992f9f91\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" Mar 12 08:12:26 crc kubenswrapper[4809]: I0312 08:12:26.941546 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" Mar 12 08:12:27 crc kubenswrapper[4809]: I0312 08:12:27.280487 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jfznn"] Mar 12 08:12:27 crc kubenswrapper[4809]: I0312 08:12:27.364761 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4jx86"] Mar 12 08:12:27 crc kubenswrapper[4809]: W0312 08:12:27.370510 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5097b432_e4b9_407e_97a3_3821992f9f91.slice/crio-bc8c096f3b4c544f6d4d3d35e4a1c1f2be77411a02108cec351d264d25d126fe WatchSource:0}: Error finding container bc8c096f3b4c544f6d4d3d35e4a1c1f2be77411a02108cec351d264d25d126fe: Status 404 returned error can't find the container with id bc8c096f3b4c544f6d4d3d35e4a1c1f2be77411a02108cec351d264d25d126fe Mar 12 08:12:27 crc kubenswrapper[4809]: I0312 08:12:27.477833 4809 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-dhvms" Mar 12 08:12:27 crc kubenswrapper[4809]: I0312 08:12:27.484788 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-947gd" Mar 12 08:12:27 crc kubenswrapper[4809]: I0312 08:12:27.817228 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-947gd"] Mar 12 08:12:27 crc kubenswrapper[4809]: W0312 08:12:27.832357 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bc14a5b_8cfa_4f00_917e_72248d3aadb5.slice/crio-b7bda3f80220047124e442f504dca292a0f0b0c4d8c617c66c6e2f440d60d039 WatchSource:0}: Error finding container b7bda3f80220047124e442f504dca292a0f0b0c4d8c617c66c6e2f440d60d039: Status 404 returned error can't find the container with id b7bda3f80220047124e442f504dca292a0f0b0c4d8c617c66c6e2f440d60d039 Mar 12 08:12:27 crc kubenswrapper[4809]: I0312 08:12:27.993162 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" event={"ID":"5097b432-e4b9-407e-97a3-3821992f9f91","Type":"ContainerStarted","Data":"bc8c096f3b4c544f6d4d3d35e4a1c1f2be77411a02108cec351d264d25d126fe"} Mar 12 08:12:27 crc kubenswrapper[4809]: I0312 08:12:27.993980 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-947gd" event={"ID":"6bc14a5b-8cfa-4f00-917e-72248d3aadb5","Type":"ContainerStarted","Data":"b7bda3f80220047124e442f504dca292a0f0b0c4d8c617c66c6e2f440d60d039"} Mar 12 08:12:27 crc kubenswrapper[4809]: I0312 08:12:27.994778 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jfznn" event={"ID":"a993902b-9d72-48a5-8acb-0dd1501e3445","Type":"ContainerStarted","Data":"270d85a65d10db569ddc7db463807b865868c43e6c3e5b8799294d89747a59e3"} Mar 12 08:12:32 crc kubenswrapper[4809]: I0312 08:12:32.024000 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jfznn" 
event={"ID":"a993902b-9d72-48a5-8acb-0dd1501e3445","Type":"ContainerStarted","Data":"c250af8eb4f2c3c749f83e611c8690f40b8a28fbeb39520adf7ddaab6bbfecb3"} Mar 12 08:12:32 crc kubenswrapper[4809]: I0312 08:12:32.028432 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" event={"ID":"5097b432-e4b9-407e-97a3-3821992f9f91","Type":"ContainerStarted","Data":"e0f2d5a01c7bc383b5425777e1764d5bfe290048c62e1ea19ae116b52f6d41db"} Mar 12 08:12:32 crc kubenswrapper[4809]: I0312 08:12:32.028619 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" Mar 12 08:12:32 crc kubenswrapper[4809]: I0312 08:12:32.047581 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jfznn" podStartSLOduration=2.502000211 podStartE2EDuration="6.047562395s" podCreationTimestamp="2026-03-12 08:12:26 +0000 UTC" firstStartedPulling="2026-03-12 08:12:27.296484887 +0000 UTC m=+820.878520620" lastFinishedPulling="2026-03-12 08:12:30.842047071 +0000 UTC m=+824.424082804" observedRunningTime="2026-03-12 08:12:32.044567913 +0000 UTC m=+825.626603646" watchObservedRunningTime="2026-03-12 08:12:32.047562395 +0000 UTC m=+825.629598118" Mar 12 08:12:33 crc kubenswrapper[4809]: I0312 08:12:33.039406 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-947gd" event={"ID":"6bc14a5b-8cfa-4f00-917e-72248d3aadb5","Type":"ContainerStarted","Data":"e8d2588539a652f011c5e8275ece067ab38dc36cd98023976163553ab276dddd"} Mar 12 08:12:33 crc kubenswrapper[4809]: I0312 08:12:33.061651 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-947gd" podStartSLOduration=2.4656763809999998 podStartE2EDuration="7.061631402s" podCreationTimestamp="2026-03-12 08:12:26 +0000 UTC" firstStartedPulling="2026-03-12 08:12:27.839089896 +0000 UTC 
m=+821.421125619" lastFinishedPulling="2026-03-12 08:12:32.435044907 +0000 UTC m=+826.017080640" observedRunningTime="2026-03-12 08:12:33.060264384 +0000 UTC m=+826.642300127" watchObservedRunningTime="2026-03-12 08:12:33.061631402 +0000 UTC m=+826.643667135" Mar 12 08:12:33 crc kubenswrapper[4809]: I0312 08:12:33.066014 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" podStartSLOduration=3.591726272 podStartE2EDuration="7.066002702s" podCreationTimestamp="2026-03-12 08:12:26 +0000 UTC" firstStartedPulling="2026-03-12 08:12:27.37387676 +0000 UTC m=+820.955912493" lastFinishedPulling="2026-03-12 08:12:30.84815319 +0000 UTC m=+824.430188923" observedRunningTime="2026-03-12 08:12:32.074863428 +0000 UTC m=+825.656899151" watchObservedRunningTime="2026-03-12 08:12:33.066002702 +0000 UTC m=+826.648038435" Mar 12 08:12:36 crc kubenswrapper[4809]: I0312 08:12:36.948574 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.165006 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9"] Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.166835 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.168796 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.176836 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9"] Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.223789 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.223845 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clm49\" (UniqueName: \"kubernetes.io/projected/39c32eb2-651b-4aee-a586-4a8b123f07f8-kube-api-access-clm49\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.224019 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: 
I0312 08:12:58.325722 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.326004 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.326104 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clm49\" (UniqueName: \"kubernetes.io/projected/39c32eb2-651b-4aee-a586-4a8b123f07f8-kube-api-access-clm49\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.326417 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.326445 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.350513 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clm49\" (UniqueName: \"kubernetes.io/projected/39c32eb2-651b-4aee-a586-4a8b123f07f8-kube-api-access-clm49\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.356082 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b"] Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.357658 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.362354 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b"] Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.427317 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2jqw\" (UniqueName: \"kubernetes.io/projected/72104bf6-f1c5-4957-b4d7-f6254d1c2121-kube-api-access-n2jqw\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.427357 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.427451 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.481376 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.528650 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.528930 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2jqw\" (UniqueName: \"kubernetes.io/projected/72104bf6-f1c5-4957-b4d7-f6254d1c2121-kube-api-access-n2jqw\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.528958 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.529445 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 
08:12:58.533251 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.546043 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2jqw\" (UniqueName: \"kubernetes.io/projected/72104bf6-f1c5-4957-b4d7-f6254d1c2121-kube-api-access-n2jqw\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.577298 4809 scope.go:117] "RemoveContainer" containerID="db18b0ca54d6405a8c8d7af663fc2d374cb6ee863ee7f0a7b8b1203f3c5fadb6" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.692694 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.894017 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9"] Mar 12 08:12:58 crc kubenswrapper[4809]: I0312 08:12:58.909324 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b"] Mar 12 08:12:59 crc kubenswrapper[4809]: I0312 08:12:59.242959 4809 generic.go:334] "Generic (PLEG): container finished" podID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerID="c6a743309d84c4ff87fc99b158077bd4e5dad905286b76383bde38ad79757a84" exitCode=0 Mar 12 08:12:59 crc kubenswrapper[4809]: I0312 08:12:59.243020 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" event={"ID":"39c32eb2-651b-4aee-a586-4a8b123f07f8","Type":"ContainerDied","Data":"c6a743309d84c4ff87fc99b158077bd4e5dad905286b76383bde38ad79757a84"} Mar 12 08:12:59 crc kubenswrapper[4809]: I0312 08:12:59.243046 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" event={"ID":"39c32eb2-651b-4aee-a586-4a8b123f07f8","Type":"ContainerStarted","Data":"f4bafadb74f520aacef3b646051c06f6609ec04fcc68863659fb52ae8aaf90bc"} Mar 12 08:12:59 crc kubenswrapper[4809]: I0312 08:12:59.245672 4809 generic.go:334] "Generic (PLEG): container finished" podID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerID="fd438b8e01ab7fed24f54683806210264ba48febf61bff651cf30717655b64db" exitCode=0 Mar 12 08:12:59 crc kubenswrapper[4809]: I0312 08:12:59.245708 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" 
event={"ID":"72104bf6-f1c5-4957-b4d7-f6254d1c2121","Type":"ContainerDied","Data":"fd438b8e01ab7fed24f54683806210264ba48febf61bff651cf30717655b64db"} Mar 12 08:12:59 crc kubenswrapper[4809]: I0312 08:12:59.245732 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" event={"ID":"72104bf6-f1c5-4957-b4d7-f6254d1c2121","Type":"ContainerStarted","Data":"67a96492dc566d8e42da1c3eeb58184b72d68243618f88abe26557b426e06c7a"} Mar 12 08:13:01 crc kubenswrapper[4809]: I0312 08:13:01.260272 4809 generic.go:334] "Generic (PLEG): container finished" podID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerID="d2dc545319d104dadc2dab5b983f20eca9946b0d55d70a9dda2356fe894c6705" exitCode=0 Mar 12 08:13:01 crc kubenswrapper[4809]: I0312 08:13:01.260347 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" event={"ID":"72104bf6-f1c5-4957-b4d7-f6254d1c2121","Type":"ContainerDied","Data":"d2dc545319d104dadc2dab5b983f20eca9946b0d55d70a9dda2356fe894c6705"} Mar 12 08:13:01 crc kubenswrapper[4809]: I0312 08:13:01.917024 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hrnwf"] Mar 12 08:13:01 crc kubenswrapper[4809]: I0312 08:13:01.919694 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:01 crc kubenswrapper[4809]: I0312 08:13:01.931545 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hrnwf"] Mar 12 08:13:01 crc kubenswrapper[4809]: I0312 08:13:01.990709 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx27n\" (UniqueName: \"kubernetes.io/projected/de95c02b-e54f-4f9c-9420-815e4fdb694c-kube-api-access-gx27n\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:01 crc kubenswrapper[4809]: I0312 08:13:01.990788 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-catalog-content\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:01 crc kubenswrapper[4809]: I0312 08:13:01.990829 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-utilities\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.092306 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-catalog-content\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.092715 4809 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-utilities\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.092795 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx27n\" (UniqueName: \"kubernetes.io/projected/de95c02b-e54f-4f9c-9420-815e4fdb694c-kube-api-access-gx27n\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.094004 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-catalog-content\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.094249 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-utilities\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.120189 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx27n\" (UniqueName: \"kubernetes.io/projected/de95c02b-e54f-4f9c-9420-815e4fdb694c-kube-api-access-gx27n\") pod \"redhat-operators-hrnwf\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.269924 4809 generic.go:334] "Generic (PLEG): container finished" podID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" 
containerID="fa96a7ecfb3c30ce305e2214f5dd8169c1555f648903a47720ab6dec828d4d42" exitCode=0 Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.269965 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" event={"ID":"72104bf6-f1c5-4957-b4d7-f6254d1c2121","Type":"ContainerDied","Data":"fa96a7ecfb3c30ce305e2214f5dd8169c1555f648903a47720ab6dec828d4d42"} Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.325353 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:02 crc kubenswrapper[4809]: I0312 08:13:02.828572 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hrnwf"] Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.278625 4809 generic.go:334] "Generic (PLEG): container finished" podID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerID="42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1" exitCode=0 Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.278733 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrnwf" event={"ID":"de95c02b-e54f-4f9c-9420-815e4fdb694c","Type":"ContainerDied","Data":"42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1"} Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.279429 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrnwf" event={"ID":"de95c02b-e54f-4f9c-9420-815e4fdb694c","Type":"ContainerStarted","Data":"eda3841700f9af3994e1faccf952ef05172cfd0f60dacb952596a1483fe2e842"} Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.543669 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.723216 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2jqw\" (UniqueName: \"kubernetes.io/projected/72104bf6-f1c5-4957-b4d7-f6254d1c2121-kube-api-access-n2jqw\") pod \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.723671 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-util\") pod \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.723710 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-bundle\") pod \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\" (UID: \"72104bf6-f1c5-4957-b4d7-f6254d1c2121\") " Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.724886 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-bundle" (OuterVolumeSpecName: "bundle") pod "72104bf6-f1c5-4957-b4d7-f6254d1c2121" (UID: "72104bf6-f1c5-4957-b4d7-f6254d1c2121"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.736759 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72104bf6-f1c5-4957-b4d7-f6254d1c2121-kube-api-access-n2jqw" (OuterVolumeSpecName: "kube-api-access-n2jqw") pod "72104bf6-f1c5-4957-b4d7-f6254d1c2121" (UID: "72104bf6-f1c5-4957-b4d7-f6254d1c2121"). InnerVolumeSpecName "kube-api-access-n2jqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.742285 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-util" (OuterVolumeSpecName: "util") pod "72104bf6-f1c5-4957-b4d7-f6254d1c2121" (UID: "72104bf6-f1c5-4957-b4d7-f6254d1c2121"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.825402 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2jqw\" (UniqueName: \"kubernetes.io/projected/72104bf6-f1c5-4957-b4d7-f6254d1c2121-kube-api-access-n2jqw\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.825444 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-util\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:03 crc kubenswrapper[4809]: I0312 08:13:03.825458 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72104bf6-f1c5-4957-b4d7-f6254d1c2121-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:04 crc kubenswrapper[4809]: I0312 08:13:04.287669 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" Mar 12 08:13:04 crc kubenswrapper[4809]: I0312 08:13:04.287673 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b" event={"ID":"72104bf6-f1c5-4957-b4d7-f6254d1c2121","Type":"ContainerDied","Data":"67a96492dc566d8e42da1c3eeb58184b72d68243618f88abe26557b426e06c7a"} Mar 12 08:13:04 crc kubenswrapper[4809]: I0312 08:13:04.287715 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67a96492dc566d8e42da1c3eeb58184b72d68243618f88abe26557b426e06c7a" Mar 12 08:13:04 crc kubenswrapper[4809]: I0312 08:13:04.289279 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrnwf" event={"ID":"de95c02b-e54f-4f9c-9420-815e4fdb694c","Type":"ContainerStarted","Data":"3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19"} Mar 12 08:13:05 crc kubenswrapper[4809]: I0312 08:13:05.301683 4809 generic.go:334] "Generic (PLEG): container finished" podID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerID="3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19" exitCode=0 Mar 12 08:13:05 crc kubenswrapper[4809]: I0312 08:13:05.302028 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrnwf" event={"ID":"de95c02b-e54f-4f9c-9420-815e4fdb694c","Type":"ContainerDied","Data":"3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19"} Mar 12 08:13:06 crc kubenswrapper[4809]: I0312 08:13:06.310531 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrnwf" event={"ID":"de95c02b-e54f-4f9c-9420-815e4fdb694c","Type":"ContainerStarted","Data":"18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1"} Mar 12 08:13:06 crc kubenswrapper[4809]: I0312 08:13:06.340018 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hrnwf" podStartSLOduration=2.89650362 podStartE2EDuration="5.339991524s" podCreationTimestamp="2026-03-12 08:13:01 +0000 UTC" firstStartedPulling="2026-03-12 08:13:03.280090909 +0000 UTC m=+856.862126642" lastFinishedPulling="2026-03-12 08:13:05.723578813 +0000 UTC m=+859.305614546" observedRunningTime="2026-03-12 08:13:06.332308553 +0000 UTC m=+859.914344276" watchObservedRunningTime="2026-03-12 08:13:06.339991524 +0000 UTC m=+859.922027277" Mar 12 08:13:09 crc kubenswrapper[4809]: I0312 08:13:09.337895 4809 generic.go:334] "Generic (PLEG): container finished" podID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerID="2569adf000b5ac9884ba45c6cca3edf053ade9b2d68559c47207b1869a445086" exitCode=0 Mar 12 08:13:09 crc kubenswrapper[4809]: I0312 08:13:09.338484 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" event={"ID":"39c32eb2-651b-4aee-a586-4a8b123f07f8","Type":"ContainerDied","Data":"2569adf000b5ac9884ba45c6cca3edf053ade9b2d68559c47207b1869a445086"} Mar 12 08:13:10 crc kubenswrapper[4809]: I0312 08:13:10.348651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" event={"ID":"39c32eb2-651b-4aee-a586-4a8b123f07f8","Type":"ContainerStarted","Data":"bb6b22f20d404ef861cd2ec83fa8ae451d32e0a39cf8580bd29de2ffc54f806e"} Mar 12 08:13:10 crc kubenswrapper[4809]: I0312 08:13:10.382165 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" podStartSLOduration=3.139928901 podStartE2EDuration="12.382143331s" podCreationTimestamp="2026-03-12 08:12:58 +0000 UTC" firstStartedPulling="2026-03-12 08:12:59.244245636 +0000 UTC m=+852.826281369" lastFinishedPulling="2026-03-12 
08:13:08.486460066 +0000 UTC m=+862.068495799" observedRunningTime="2026-03-12 08:13:10.377157994 +0000 UTC m=+863.959193747" watchObservedRunningTime="2026-03-12 08:13:10.382143331 +0000 UTC m=+863.964179064" Mar 12 08:13:11 crc kubenswrapper[4809]: I0312 08:13:11.358190 4809 generic.go:334] "Generic (PLEG): container finished" podID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerID="bb6b22f20d404ef861cd2ec83fa8ae451d32e0a39cf8580bd29de2ffc54f806e" exitCode=0 Mar 12 08:13:11 crc kubenswrapper[4809]: I0312 08:13:11.358259 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" event={"ID":"39c32eb2-651b-4aee-a586-4a8b123f07f8","Type":"ContainerDied","Data":"bb6b22f20d404ef861cd2ec83fa8ae451d32e0a39cf8580bd29de2ffc54f806e"} Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.183356 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-z59kc"] Mar 12 08:13:12 crc kubenswrapper[4809]: E0312 08:13:12.184168 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerName="extract" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.184186 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerName="extract" Mar 12 08:13:12 crc kubenswrapper[4809]: E0312 08:13:12.184202 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerName="util" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.184210 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerName="util" Mar 12 08:13:12 crc kubenswrapper[4809]: E0312 08:13:12.184233 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerName="pull" Mar 12 08:13:12 crc kubenswrapper[4809]: 
I0312 08:13:12.184240 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerName="pull" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.184401 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="72104bf6-f1c5-4957-b4d7-f6254d1c2121" containerName="extract" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.185068 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-z59kc" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.188834 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-dsjnp" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.192435 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.192453 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.204202 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-z59kc"] Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.208257 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtbl\" (UniqueName: \"kubernetes.io/projected/594fada9-d745-48a4-888c-a162cae5bf71-kube-api-access-vmtbl\") pod \"cluster-logging-operator-c769fd969-z59kc\" (UID: \"594fada9-d745-48a4-888c-a162cae5bf71\") " pod="openshift-logging/cluster-logging-operator-c769fd969-z59kc" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.310007 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmtbl\" (UniqueName: \"kubernetes.io/projected/594fada9-d745-48a4-888c-a162cae5bf71-kube-api-access-vmtbl\") pod 
\"cluster-logging-operator-c769fd969-z59kc\" (UID: \"594fada9-d745-48a4-888c-a162cae5bf71\") " pod="openshift-logging/cluster-logging-operator-c769fd969-z59kc" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.326539 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.326609 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.355380 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmtbl\" (UniqueName: \"kubernetes.io/projected/594fada9-d745-48a4-888c-a162cae5bf71-kube-api-access-vmtbl\") pod \"cluster-logging-operator-c769fd969-z59kc\" (UID: \"594fada9-d745-48a4-888c-a162cae5bf71\") " pod="openshift-logging/cluster-logging-operator-c769fd969-z59kc" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.505069 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-z59kc" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.759044 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.919781 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-bundle\") pod \"39c32eb2-651b-4aee-a586-4a8b123f07f8\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.919866 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clm49\" (UniqueName: \"kubernetes.io/projected/39c32eb2-651b-4aee-a586-4a8b123f07f8-kube-api-access-clm49\") pod \"39c32eb2-651b-4aee-a586-4a8b123f07f8\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.920055 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-util\") pod \"39c32eb2-651b-4aee-a586-4a8b123f07f8\" (UID: \"39c32eb2-651b-4aee-a586-4a8b123f07f8\") " Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.920827 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-bundle" (OuterVolumeSpecName: "bundle") pod "39c32eb2-651b-4aee-a586-4a8b123f07f8" (UID: "39c32eb2-651b-4aee-a586-4a8b123f07f8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.927028 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c32eb2-651b-4aee-a586-4a8b123f07f8-kube-api-access-clm49" (OuterVolumeSpecName: "kube-api-access-clm49") pod "39c32eb2-651b-4aee-a586-4a8b123f07f8" (UID: "39c32eb2-651b-4aee-a586-4a8b123f07f8"). InnerVolumeSpecName "kube-api-access-clm49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:13:12 crc kubenswrapper[4809]: I0312 08:13:12.930830 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-util" (OuterVolumeSpecName: "util") pod "39c32eb2-651b-4aee-a586-4a8b123f07f8" (UID: "39c32eb2-651b-4aee-a586-4a8b123f07f8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.016236 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-z59kc"] Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.022397 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-util\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.022842 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39c32eb2-651b-4aee-a586-4a8b123f07f8-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.022861 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clm49\" (UniqueName: \"kubernetes.io/projected/39c32eb2-651b-4aee-a586-4a8b123f07f8-kube-api-access-clm49\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.395565 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.395771 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hrnwf" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="registry-server" probeResult="failure" output=< Mar 12 08:13:13 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:13:13 crc kubenswrapper[4809]: > Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.395774 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9" event={"ID":"39c32eb2-651b-4aee-a586-4a8b123f07f8","Type":"ContainerDied","Data":"f4bafadb74f520aacef3b646051c06f6609ec04fcc68863659fb52ae8aaf90bc"} Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.396218 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4bafadb74f520aacef3b646051c06f6609ec04fcc68863659fb52ae8aaf90bc" Mar 12 08:13:13 crc kubenswrapper[4809]: I0312 08:13:13.397655 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-z59kc" event={"ID":"594fada9-d745-48a4-888c-a162cae5bf71","Type":"ContainerStarted","Data":"5f06037f39c02968a93aed2e4d739fa8f6953d8d3845a11d021cee1a67265d62"} Mar 12 08:13:19 crc kubenswrapper[4809]: I0312 08:13:19.467702 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-z59kc" event={"ID":"594fada9-d745-48a4-888c-a162cae5bf71","Type":"ContainerStarted","Data":"3ce6f6df3d2bdaf79f2ec847d539e23c224cbb464d310c205fb5f2569136036c"} Mar 12 08:13:19 crc kubenswrapper[4809]: I0312 08:13:19.527878 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-z59kc" 
podStartSLOduration=1.344061474 podStartE2EDuration="7.52786303s" podCreationTimestamp="2026-03-12 08:13:12 +0000 UTC" firstStartedPulling="2026-03-12 08:13:13.035596646 +0000 UTC m=+866.617632379" lastFinishedPulling="2026-03-12 08:13:19.219398202 +0000 UTC m=+872.801433935" observedRunningTime="2026-03-12 08:13:19.522999996 +0000 UTC m=+873.105035739" watchObservedRunningTime="2026-03-12 08:13:19.52786303 +0000 UTC m=+873.109898763" Mar 12 08:13:22 crc kubenswrapper[4809]: I0312 08:13:22.380698 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:22 crc kubenswrapper[4809]: I0312 08:13:22.435301 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.492632 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h"] Mar 12 08:13:23 crc kubenswrapper[4809]: E0312 08:13:23.492894 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerName="pull" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.492906 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerName="pull" Mar 12 08:13:23 crc kubenswrapper[4809]: E0312 08:13:23.492923 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerName="extract" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.492929 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerName="extract" Mar 12 08:13:23 crc kubenswrapper[4809]: E0312 08:13:23.492941 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerName="util" Mar 12 08:13:23 crc 
kubenswrapper[4809]: I0312 08:13:23.492948 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerName="util" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.493085 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c32eb2-651b-4aee-a586-4a8b123f07f8" containerName="extract" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.493937 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.496355 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.496580 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.496999 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.497215 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.497475 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-2z49d" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.498444 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.512983 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9klx6\" (UniqueName: 
\"kubernetes.io/projected/861c2912-a932-4142-9b25-c7c0e1aaf062-kube-api-access-9klx6\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.513026 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/861c2912-a932-4142-9b25-c7c0e1aaf062-manager-config\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.513049 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-apiservice-cert\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.513082 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.513161 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-webhook-cert\") pod 
\"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.520577 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h"] Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.615183 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-webhook-cert\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.615249 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9klx6\" (UniqueName: \"kubernetes.io/projected/861c2912-a932-4142-9b25-c7c0e1aaf062-kube-api-access-9klx6\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.615279 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/861c2912-a932-4142-9b25-c7c0e1aaf062-manager-config\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.615296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-apiservice-cert\") pod 
\"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.615342 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.618177 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/861c2912-a932-4142-9b25-c7c0e1aaf062-manager-config\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.622840 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-apiservice-cert\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.627682 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 
12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.628003 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/861c2912-a932-4142-9b25-c7c0e1aaf062-webhook-cert\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.638670 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9klx6\" (UniqueName: \"kubernetes.io/projected/861c2912-a932-4142-9b25-c7c0e1aaf062-kube-api-access-9klx6\") pod \"loki-operator-controller-manager-65bb5b59df-8jk5h\" (UID: \"861c2912-a932-4142-9b25-c7c0e1aaf062\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:23 crc kubenswrapper[4809]: I0312 08:13:23.813501 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:24 crc kubenswrapper[4809]: I0312 08:13:24.363353 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h"] Mar 12 08:13:24 crc kubenswrapper[4809]: W0312 08:13:24.378191 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod861c2912_a932_4142_9b25_c7c0e1aaf062.slice/crio-ff47a8f92c41b9b1dd38ad02019c0967d53f8aafd6b764a123a7fc28a2075878 WatchSource:0}: Error finding container ff47a8f92c41b9b1dd38ad02019c0967d53f8aafd6b764a123a7fc28a2075878: Status 404 returned error can't find the container with id ff47a8f92c41b9b1dd38ad02019c0967d53f8aafd6b764a123a7fc28a2075878 Mar 12 08:13:24 crc kubenswrapper[4809]: I0312 08:13:24.510954 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" event={"ID":"861c2912-a932-4142-9b25-c7c0e1aaf062","Type":"ContainerStarted","Data":"ff47a8f92c41b9b1dd38ad02019c0967d53f8aafd6b764a123a7fc28a2075878"} Mar 12 08:13:25 crc kubenswrapper[4809]: I0312 08:13:25.910088 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hrnwf"] Mar 12 08:13:25 crc kubenswrapper[4809]: I0312 08:13:25.910334 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hrnwf" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="registry-server" containerID="cri-o://18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1" gracePeriod=2 Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.307468 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.358629 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-catalog-content\") pod \"de95c02b-e54f-4f9c-9420-815e4fdb694c\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.358927 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-utilities\") pod \"de95c02b-e54f-4f9c-9420-815e4fdb694c\" (UID: \"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.358960 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx27n\" (UniqueName: \"kubernetes.io/projected/de95c02b-e54f-4f9c-9420-815e4fdb694c-kube-api-access-gx27n\") pod \"de95c02b-e54f-4f9c-9420-815e4fdb694c\" (UID: 
\"de95c02b-e54f-4f9c-9420-815e4fdb694c\") " Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.359729 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-utilities" (OuterVolumeSpecName: "utilities") pod "de95c02b-e54f-4f9c-9420-815e4fdb694c" (UID: "de95c02b-e54f-4f9c-9420-815e4fdb694c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.365298 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de95c02b-e54f-4f9c-9420-815e4fdb694c-kube-api-access-gx27n" (OuterVolumeSpecName: "kube-api-access-gx27n") pod "de95c02b-e54f-4f9c-9420-815e4fdb694c" (UID: "de95c02b-e54f-4f9c-9420-815e4fdb694c"). InnerVolumeSpecName "kube-api-access-gx27n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.461013 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.461061 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx27n\" (UniqueName: \"kubernetes.io/projected/de95c02b-e54f-4f9c-9420-815e4fdb694c-kube-api-access-gx27n\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.491602 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de95c02b-e54f-4f9c-9420-815e4fdb694c" (UID: "de95c02b-e54f-4f9c-9420-815e4fdb694c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.528403 4809 generic.go:334] "Generic (PLEG): container finished" podID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerID="18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1" exitCode=0 Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.528457 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrnwf" event={"ID":"de95c02b-e54f-4f9c-9420-815e4fdb694c","Type":"ContainerDied","Data":"18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1"} Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.528496 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrnwf" event={"ID":"de95c02b-e54f-4f9c-9420-815e4fdb694c","Type":"ContainerDied","Data":"eda3841700f9af3994e1faccf952ef05172cfd0f60dacb952596a1483fe2e842"} Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.528510 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hrnwf" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.528522 4809 scope.go:117] "RemoveContainer" containerID="18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.556071 4809 scope.go:117] "RemoveContainer" containerID="3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.576316 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de95c02b-e54f-4f9c-9420-815e4fdb694c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.613861 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hrnwf"] Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.620403 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hrnwf"] Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.638510 4809 scope.go:117] "RemoveContainer" containerID="42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.667219 4809 scope.go:117] "RemoveContainer" containerID="18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1" Mar 12 08:13:26 crc kubenswrapper[4809]: E0312 08:13:26.668197 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1\": container with ID starting with 18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1 not found: ID does not exist" containerID="18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.668248 4809 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1"} err="failed to get container status \"18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1\": rpc error: code = NotFound desc = could not find container \"18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1\": container with ID starting with 18393e575e2e68df5b8b93a640b33f38fdd28527d832facbf45219d8831005a1 not found: ID does not exist" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.668278 4809 scope.go:117] "RemoveContainer" containerID="3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19" Mar 12 08:13:26 crc kubenswrapper[4809]: E0312 08:13:26.668950 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19\": container with ID starting with 3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19 not found: ID does not exist" containerID="3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.668986 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19"} err="failed to get container status \"3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19\": rpc error: code = NotFound desc = could not find container \"3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19\": container with ID starting with 3add114f0fbcc3925d853f89aef1a045f43453345f62b333ba959dcbae7b1b19 not found: ID does not exist" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.669007 4809 scope.go:117] "RemoveContainer" containerID="42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1" Mar 12 08:13:26 crc kubenswrapper[4809]: E0312 08:13:26.669446 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1\": container with ID starting with 42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1 not found: ID does not exist" containerID="42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1" Mar 12 08:13:26 crc kubenswrapper[4809]: I0312 08:13:26.669516 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1"} err="failed to get container status \"42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1\": rpc error: code = NotFound desc = could not find container \"42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1\": container with ID starting with 42ff0e63a794164bebdd036cf6ba58399d64bbc65d24738da3af50a621338ca1 not found: ID does not exist" Mar 12 08:13:27 crc kubenswrapper[4809]: I0312 08:13:27.116373 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" path="/var/lib/kubelet/pods/de95c02b-e54f-4f9c-9420-815e4fdb694c/volumes" Mar 12 08:13:30 crc kubenswrapper[4809]: I0312 08:13:30.557887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" event={"ID":"861c2912-a932-4142-9b25-c7c0e1aaf062","Type":"ContainerStarted","Data":"c10217f3a70e7abcb63f0bcc4277f61fc5d5c179240d9daed5170bc2caa8e343"} Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.322231 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5njnf"] Mar 12 08:13:36 crc kubenswrapper[4809]: E0312 08:13:36.323086 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="extract-utilities" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.323096 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="extract-utilities" Mar 12 08:13:36 crc kubenswrapper[4809]: E0312 08:13:36.323141 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="extract-content" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.323148 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="extract-content" Mar 12 08:13:36 crc kubenswrapper[4809]: E0312 08:13:36.323157 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="registry-server" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.323163 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="registry-server" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.323282 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="de95c02b-e54f-4f9c-9420-815e4fdb694c" containerName="registry-server" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.324167 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.339014 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5njnf"] Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.349769 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-catalog-content\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.349855 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-utilities\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.349928 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcch9\" (UniqueName: \"kubernetes.io/projected/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-kube-api-access-kcch9\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.451783 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcch9\" (UniqueName: \"kubernetes.io/projected/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-kube-api-access-kcch9\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.451945 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-catalog-content\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.452005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-utilities\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.452560 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-catalog-content\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.453184 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-utilities\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.476881 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcch9\" (UniqueName: \"kubernetes.io/projected/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-kube-api-access-kcch9\") pod \"community-operators-5njnf\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:36 crc kubenswrapper[4809]: I0312 08:13:36.651253 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:38 crc kubenswrapper[4809]: I0312 08:13:38.040962 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5njnf"] Mar 12 08:13:40 crc kubenswrapper[4809]: I0312 08:13:40.186516 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5njnf" event={"ID":"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e","Type":"ContainerStarted","Data":"4795f52035524d1ab2781f3495e33e23e3b0dcbe6941fe023b5e264a0ac0bc9f"} Mar 12 08:13:40 crc kubenswrapper[4809]: I0312 08:13:40.198995 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" event={"ID":"861c2912-a932-4142-9b25-c7c0e1aaf062","Type":"ContainerStarted","Data":"a0dfb2593db45a01f781be363b4524988dacd8347196f590fe73a27730754385"} Mar 12 08:13:40 crc kubenswrapper[4809]: I0312 08:13:40.222316 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-njprg" podUID="88d136d4-995f-44d0-8691-e84bfacb68c3" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 08:13:41 crc kubenswrapper[4809]: I0312 08:13:41.211829 4809 generic.go:334] "Generic (PLEG): container finished" podID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerID="bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570" exitCode=0 Mar 12 08:13:41 crc kubenswrapper[4809]: I0312 08:13:41.211872 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5njnf" event={"ID":"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e","Type":"ContainerDied","Data":"bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570"} Mar 12 08:13:41 crc kubenswrapper[4809]: I0312 08:13:41.212490 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:41 crc kubenswrapper[4809]: I0312 08:13:41.216356 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" Mar 12 08:13:41 crc kubenswrapper[4809]: I0312 08:13:41.255628 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" podStartSLOduration=4.720563692 podStartE2EDuration="18.255602493s" podCreationTimestamp="2026-03-12 08:13:23 +0000 UTC" firstStartedPulling="2026-03-12 08:13:24.383197925 +0000 UTC m=+877.965233658" lastFinishedPulling="2026-03-12 08:13:37.918236726 +0000 UTC m=+891.500272459" observedRunningTime="2026-03-12 08:13:41.25434597 +0000 UTC m=+894.836381763" watchObservedRunningTime="2026-03-12 08:13:41.255602493 +0000 UTC m=+894.837638256" Mar 12 08:13:42 crc kubenswrapper[4809]: I0312 08:13:42.228793 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5njnf" event={"ID":"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e","Type":"ContainerStarted","Data":"3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971"} Mar 12 08:13:43 crc kubenswrapper[4809]: I0312 08:13:43.239188 4809 generic.go:334] "Generic (PLEG): container finished" podID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerID="3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971" exitCode=0 Mar 12 08:13:43 crc kubenswrapper[4809]: I0312 08:13:43.239234 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5njnf" event={"ID":"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e","Type":"ContainerDied","Data":"3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971"} Mar 12 08:13:44 crc kubenswrapper[4809]: I0312 08:13:44.247504 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-5njnf" event={"ID":"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e","Type":"ContainerStarted","Data":"ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1"} Mar 12 08:13:44 crc kubenswrapper[4809]: I0312 08:13:44.274723 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5njnf" podStartSLOduration=5.759312096 podStartE2EDuration="8.274705954s" podCreationTimestamp="2026-03-12 08:13:36 +0000 UTC" firstStartedPulling="2026-03-12 08:13:41.214944592 +0000 UTC m=+894.796980345" lastFinishedPulling="2026-03-12 08:13:43.73033847 +0000 UTC m=+897.312374203" observedRunningTime="2026-03-12 08:13:44.269380407 +0000 UTC m=+897.851416160" watchObservedRunningTime="2026-03-12 08:13:44.274705954 +0000 UTC m=+897.856741687" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.176284 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.177294 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.180012 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.180041 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.180727 4809 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-m5lbh" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.191446 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.369703 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgdt\" (UniqueName: \"kubernetes.io/projected/b0947d9f-d3aa-4afa-94ea-0606b17d7d0b-kube-api-access-cvgdt\") pod \"minio\" (UID: \"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b\") " pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.370494 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c25735d-4b89-46bd-82f2-72b891e3d466\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c25735d-4b89-46bd-82f2-72b891e3d466\") pod \"minio\" (UID: \"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b\") " pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.472253 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c25735d-4b89-46bd-82f2-72b891e3d466\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c25735d-4b89-46bd-82f2-72b891e3d466\") pod \"minio\" (UID: \"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b\") " pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.472403 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-cvgdt\" (UniqueName: \"kubernetes.io/projected/b0947d9f-d3aa-4afa-94ea-0606b17d7d0b-kube-api-access-cvgdt\") pod \"minio\" (UID: \"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b\") " pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.474805 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.474847 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c25735d-4b89-46bd-82f2-72b891e3d466\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c25735d-4b89-46bd-82f2-72b891e3d466\") pod \"minio\" (UID: \"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ebb4a95d9645e7c924a7c8c3abd8ba1621f9ec2680fb3503bb0370f88791a8e9/globalmount\"" pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.500851 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c25735d-4b89-46bd-82f2-72b891e3d466\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c25735d-4b89-46bd-82f2-72b891e3d466\") pod \"minio\" (UID: \"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b\") " pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.505991 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvgdt\" (UniqueName: \"kubernetes.io/projected/b0947d9f-d3aa-4afa-94ea-0606b17d7d0b-kube-api-access-cvgdt\") pod \"minio\" (UID: \"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b\") " pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.542512 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Mar 12 08:13:45 crc kubenswrapper[4809]: I0312 08:13:45.958765 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 12 08:13:46 crc kubenswrapper[4809]: I0312 08:13:46.262756 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b","Type":"ContainerStarted","Data":"6ec95434030064e13a0b5fed0ae4408e6fc083938f4fa34a4f1cf2743145c312"} Mar 12 08:13:46 crc kubenswrapper[4809]: I0312 08:13:46.652993 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:46 crc kubenswrapper[4809]: I0312 08:13:46.653026 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:46 crc kubenswrapper[4809]: I0312 08:13:46.708311 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:51 crc kubenswrapper[4809]: I0312 08:13:51.312942 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"b0947d9f-d3aa-4afa-94ea-0606b17d7d0b","Type":"ContainerStarted","Data":"d602d5d5cd0702a99e896a1ef9dbdfd13e8d1a744a047ce38cfbcf011cc3907a"} Mar 12 08:13:51 crc kubenswrapper[4809]: I0312 08:13:51.336350 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.20450678 podStartE2EDuration="9.336324811s" podCreationTimestamp="2026-03-12 08:13:42 +0000 UTC" firstStartedPulling="2026-03-12 08:13:45.969515148 +0000 UTC m=+899.551550871" lastFinishedPulling="2026-03-12 08:13:51.101333159 +0000 UTC m=+904.683368902" observedRunningTime="2026-03-12 08:13:51.330075219 +0000 UTC m=+904.912110962" watchObservedRunningTime="2026-03-12 08:13:51.336324811 +0000 UTC m=+904.918360544" Mar 12 08:13:56 crc kubenswrapper[4809]: I0312 
08:13:56.716572 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:56 crc kubenswrapper[4809]: I0312 08:13:56.789048 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5njnf"] Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.361513 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5njnf" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerName="registry-server" containerID="cri-o://ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1" gracePeriod=2 Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.663911 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks"] Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.665536 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.672709 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.674229 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-49rlt" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.674814 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.675140 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.679630 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks"] Mar 12 08:13:57 crc 
kubenswrapper[4809]: I0312 08:13:57.683575 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.820424 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.820523 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chjc\" (UniqueName: \"kubernetes.io/projected/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-kube-api-access-7chjc\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.820553 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.820597 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-config\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc 
kubenswrapper[4809]: I0312 08:13:57.820633 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.851579 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.895515 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-zb8df"] Mar 12 08:13:57 crc kubenswrapper[4809]: E0312 08:13:57.904428 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerName="registry-server" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.904470 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerName="registry-server" Mar 12 08:13:57 crc kubenswrapper[4809]: E0312 08:13:57.904510 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerName="extract-utilities" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.904519 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerName="extract-utilities" Mar 12 08:13:57 crc kubenswrapper[4809]: E0312 08:13:57.904533 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerName="extract-content" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.904543 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" 
containerName="extract-content" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.905054 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerName="registry-server" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.905944 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.916094 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.916359 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.916564 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.918013 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-zb8df"] Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.923129 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7chjc\" (UniqueName: \"kubernetes.io/projected/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-kube-api-access-7chjc\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.923175 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.923224 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-config\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.923255 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.923325 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.929436 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-config\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.930006 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.935715 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.936358 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.954980 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7chjc\" (UniqueName: \"kubernetes.io/projected/8ac4723a-9ff0-4186-8177-8a86f6db8b9f-kube-api-access-7chjc\") pod \"logging-loki-distributor-5d5548c9f5-pbfks\" (UID: \"8ac4723a-9ff0-4186-8177-8a86f6db8b9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.966197 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s"] Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.967152 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.972937 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.976653 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Mar 12 08:13:57 crc kubenswrapper[4809]: I0312 08:13:57.986411 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.008407 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025230 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcch9\" (UniqueName: \"kubernetes.io/projected/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-kube-api-access-kcch9\") pod \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025330 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-catalog-content\") pod \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025461 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-utilities\") pod \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\" (UID: \"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e\") " Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025648 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025707 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025737 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bl6f\" (UniqueName: \"kubernetes.io/projected/c3630a5f-f4c4-42af-8335-60dbcbdb4961-kube-api-access-4bl6f\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025755 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025772 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.025797 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3630a5f-f4c4-42af-8335-60dbcbdb4961-config\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.029833 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-utilities" (OuterVolumeSpecName: "utilities") pod "e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" (UID: "e70edc9e-c0c9-49d3-ba51-e3967bc1b54e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.055496 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-kube-api-access-kcch9" (OuterVolumeSpecName: "kube-api-access-kcch9") pod "e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" (UID: "e70edc9e-c0c9-49d3-ba51-e3967bc1b54e"). InnerVolumeSpecName "kube-api-access-kcch9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.085530 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-fd898bfdd-dhc65"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.086863 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.088989 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.089380 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.094387 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.094648 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.094794 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.104594 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" (UID: "e70edc9e-c0c9-49d3-ba51-e3967bc1b54e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.106815 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.107977 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.115867 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-pb64q" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127544 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127604 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-config\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127638 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127678 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfggs\" (UniqueName: \"kubernetes.io/projected/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-kube-api-access-dfggs\") pod 
\"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127707 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127735 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127772 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bl6f\" (UniqueName: \"kubernetes.io/projected/c3630a5f-f4c4-42af-8335-60dbcbdb4961-kube-api-access-4bl6f\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127800 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127820 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127846 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3630a5f-f4c4-42af-8335-60dbcbdb4961-config\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127896 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127937 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127947 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.127957 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcch9\" (UniqueName: \"kubernetes.io/projected/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e-kube-api-access-kcch9\") on node \"crc\" DevicePath \"\"" Mar 12 
08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.128806 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.140808 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-fd898bfdd-dhc65"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.141559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3630a5f-f4c4-42af-8335-60dbcbdb4961-config\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.143581 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.144439 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.144683 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" 
(UniqueName: \"kubernetes.io/secret/c3630a5f-f4c4-42af-8335-60dbcbdb4961-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.181356 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.203644 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bl6f\" (UniqueName: \"kubernetes.io/projected/c3630a5f-f4c4-42af-8335-60dbcbdb4961-kube-api-access-4bl6f\") pod \"logging-loki-querier-76bf7b6d45-zb8df\" (UID: \"c3630a5f-f4c4-42af-8335-60dbcbdb4961\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234485 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234556 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234585 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234619 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-lokistack-gateway\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234650 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxzx\" (UniqueName: \"kubernetes.io/projected/8232d992-4bfb-46ca-a440-647d8c006309-kube-api-access-rwxzx\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234690 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234740 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tenants\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " 
pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234778 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tls-secret\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.234800 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.235663 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.235744 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.235825 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.237758 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-config\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.237902 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.237999 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tenants\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.238099 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-lokistack-gateway\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: 
\"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.238285 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-rbac\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.238385 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jmf7\" (UniqueName: \"kubernetes.io/projected/7546fb46-f601-417f-ad26-69a4fb625fdc-kube-api-access-9jmf7\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.238504 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfggs\" (UniqueName: \"kubernetes.io/projected/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-kube-api-access-dfggs\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.238606 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tls-secret\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.238721 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" 
(UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-rbac\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.240668 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-config\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.238319 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.257714 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.267091 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 
08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.289062 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.309928 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfggs\" (UniqueName: \"kubernetes.io/projected/9fc47673-0fe3-49f6-a2bb-06845a2f3fc4-kube-api-access-dfggs\") pod \"logging-loki-query-frontend-6d6859c548-6pp5s\" (UID: \"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.334337 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341372 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tls-secret\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341440 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341466 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-ca-bundle\") pod 
\"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341499 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341525 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tenants\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341546 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-lokistack-gateway\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341569 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-rbac\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341589 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jmf7\" (UniqueName: 
\"kubernetes.io/projected/7546fb46-f601-417f-ad26-69a4fb625fdc-kube-api-access-9jmf7\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341618 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tls-secret\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341637 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-rbac\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341659 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341680 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341705 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-lokistack-gateway\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341736 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwxzx\" (UniqueName: \"kubernetes.io/projected/8232d992-4bfb-46ca-a440-647d8c006309-kube-api-access-rwxzx\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341759 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.341803 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tenants\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.346943 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " 
pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: E0312 08:13:58.348599 4809 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 12 08:13:58 crc kubenswrapper[4809]: E0312 08:13:58.349079 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tls-secret podName:7546fb46-f601-417f-ad26-69a4fb625fdc nodeName:}" failed. No retries permitted until 2026-03-12 08:13:58.849058148 +0000 UTC m=+912.431093881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tls-secret") pod "logging-loki-gateway-fd898bfdd-tbqw2" (UID: "7546fb46-f601-417f-ad26-69a4fb625fdc") : secret "logging-loki-gateway-http" not found Mar 12 08:13:58 crc kubenswrapper[4809]: E0312 08:13:58.350203 4809 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 12 08:13:58 crc kubenswrapper[4809]: E0312 08:13:58.350273 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tls-secret podName:8232d992-4bfb-46ca-a440-647d8c006309 nodeName:}" failed. No retries permitted until 2026-03-12 08:13:58.850253221 +0000 UTC m=+912.432288954 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tls-secret") pod "logging-loki-gateway-fd898bfdd-dhc65" (UID: "8232d992-4bfb-46ca-a440-647d8c006309") : secret "logging-loki-gateway-http" not found Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.364167 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tenants\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.369644 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-rbac\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.373014 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-rbac\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.373871 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-lokistack-gateway\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.374149 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.374302 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.374990 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-lokistack-gateway\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.375032 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.377093 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " 
pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.379778 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwxzx\" (UniqueName: \"kubernetes.io/projected/8232d992-4bfb-46ca-a440-647d8c006309-kube-api-access-rwxzx\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.388979 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tenants\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.392092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.395711 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jmf7\" (UniqueName: \"kubernetes.io/projected/7546fb46-f601-417f-ad26-69a4fb625fdc-kube-api-access-9jmf7\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.403539 4809 generic.go:334] "Generic (PLEG): container finished" podID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" containerID="ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1" exitCode=0 Mar 12 08:13:58 
crc kubenswrapper[4809]: I0312 08:13:58.403586 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5njnf" event={"ID":"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e","Type":"ContainerDied","Data":"ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1"} Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.403617 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5njnf" event={"ID":"e70edc9e-c0c9-49d3-ba51-e3967bc1b54e","Type":"ContainerDied","Data":"4795f52035524d1ab2781f3495e33e23e3b0dcbe6941fe023b5e264a0ac0bc9f"} Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.403637 4809 scope.go:117] "RemoveContainer" containerID="ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.403767 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5njnf" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.457581 4809 scope.go:117] "RemoveContainer" containerID="3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.495744 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5njnf"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.500846 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5njnf"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.509541 4809 scope.go:117] "RemoveContainer" containerID="bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.579044 4809 scope.go:117] "RemoveContainer" containerID="ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1" Mar 12 08:13:58 crc kubenswrapper[4809]: E0312 08:13:58.580897 4809 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1\": container with ID starting with ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1 not found: ID does not exist" containerID="ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.580962 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1"} err="failed to get container status \"ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1\": rpc error: code = NotFound desc = could not find container \"ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1\": container with ID starting with ba3e5353df3f14cf8e1aca6aed7bf8aee24cd5cf59f9dbdf65805186c430fad1 not found: ID does not exist" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.581004 4809 scope.go:117] "RemoveContainer" containerID="3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971" Mar 12 08:13:58 crc kubenswrapper[4809]: E0312 08:13:58.583531 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971\": container with ID starting with 3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971 not found: ID does not exist" containerID="3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.583570 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971"} err="failed to get container status \"3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971\": rpc error: code = NotFound desc = could not find container 
\"3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971\": container with ID starting with 3e27a489def4b2ac986e1f24cb44cc9566a8165305b3b7782c1f804b9bc0a971 not found: ID does not exist" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.583590 4809 scope.go:117] "RemoveContainer" containerID="bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570" Mar 12 08:13:58 crc kubenswrapper[4809]: E0312 08:13:58.585875 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570\": container with ID starting with bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570 not found: ID does not exist" containerID="bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.585913 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570"} err="failed to get container status \"bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570\": rpc error: code = NotFound desc = could not find container \"bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570\": container with ID starting with bdf160afbdbee75d9c41ccb87f8b7d201ba9ae327ca27734d3ebc8bf246f9570 not found: ID does not exist" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.669001 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.719221 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-zb8df"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.857489 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tls-secret\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.857605 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tls-secret\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.861943 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.862895 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.864388 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7546fb46-f601-417f-ad26-69a4fb625fdc-tls-secret\") pod \"logging-loki-gateway-fd898bfdd-tbqw2\" (UID: \"7546fb46-f601-417f-ad26-69a4fb625fdc\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.864450 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8232d992-4bfb-46ca-a440-647d8c006309-tls-secret\") pod \"logging-loki-gateway-fd898bfdd-dhc65\" (UID: \"8232d992-4bfb-46ca-a440-647d8c006309\") " pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.866266 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 
08:13:58.866493 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.883688 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.924990 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.926029 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.928749 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.928948 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.939053 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.959553 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.959646 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef2f625-286b-49c8-97d9-a98350cfea7b-config\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 
08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.959731 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.959779 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fe9dbe9a-09cf-4bd4-bf62-5610a0d17e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe9dbe9a-09cf-4bd4-bf62-5610a0d17e8c\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.959924 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9c4b2f9f-af28-424e-bf99-ef8734aea9e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9c4b2f9f-af28-424e-bf99-ef8734aea9e7\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.959969 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2km9\" (UniqueName: \"kubernetes.io/projected/1ef2f625-286b-49c8-97d9-a98350cfea7b-kube-api-access-v2km9\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.960027 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.960105 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:58 crc kubenswrapper[4809]: I0312 08:13:58.974799 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s"] Mar 12 08:13:58 crc kubenswrapper[4809]: W0312 08:13:58.975938 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fc47673_0fe3_49f6_a2bb_06845a2f3fc4.slice/crio-4b8437e362fd4af15c1daa8f6a181c0dbc95bcca9b918fa8f1463143152176ff WatchSource:0}: Error finding container 4b8437e362fd4af15c1daa8f6a181c0dbc95bcca9b918fa8f1463143152176ff: Status 404 returned error can't find the container with id 4b8437e362fd4af15c1daa8f6a181c0dbc95bcca9b918fa8f1463143152176ff Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.012080 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.013573 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.016961 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.017060 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.021980 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.047742 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.061989 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9c4b2f9f-af28-424e-bf99-ef8734aea9e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9c4b2f9f-af28-424e-bf99-ef8734aea9e7\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062037 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2km9\" (UniqueName: \"kubernetes.io/projected/1ef2f625-286b-49c8-97d9-a98350cfea7b-kube-api-access-v2km9\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062074 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: 
\"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062108 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062147 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062167 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062201 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062237 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062268 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef2f625-286b-49c8-97d9-a98350cfea7b-config\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062299 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062323 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mzj6\" (UniqueName: \"kubernetes.io/projected/ba5833e7-becf-412f-879b-6cab8777fb0b-kube-api-access-7mzj6\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062352 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba5833e7-becf-412f-879b-6cab8777fb0b-config\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062377 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" 
(UniqueName: \"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062403 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fe9dbe9a-09cf-4bd4-bf62-5610a0d17e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe9dbe9a-09cf-4bd4-bf62-5610a0d17e8c\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.062440 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1f5bc57-936f-415f-8dbe-af72dbaad3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1f5bc57-936f-415f-8dbe-af72dbaad3a2\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.064876 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.065784 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef2f625-286b-49c8-97d9-a98350cfea7b-config\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.068453 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.068493 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9c4b2f9f-af28-424e-bf99-ef8734aea9e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9c4b2f9f-af28-424e-bf99-ef8734aea9e7\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ea1aac08c2f6e94a2f5e1c85684376cb85ee150575ad370082167658f455f1b/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.068497 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.068547 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fe9dbe9a-09cf-4bd4-bf62-5610a0d17e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe9dbe9a-09cf-4bd4-bf62-5610a0d17e8c\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6e592c6d893d62ebee19440657d9998dcb10b71ef33905219f494c6d47f74b10/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.069881 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.072439 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.076693 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/1ef2f625-286b-49c8-97d9-a98350cfea7b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.081357 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2km9\" (UniqueName: \"kubernetes.io/projected/1ef2f625-286b-49c8-97d9-a98350cfea7b-kube-api-access-v2km9\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.099314 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fe9dbe9a-09cf-4bd4-bf62-5610a0d17e8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe9dbe9a-09cf-4bd4-bf62-5610a0d17e8c\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.100673 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.111647 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9c4b2f9f-af28-424e-bf99-ef8734aea9e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9c4b2f9f-af28-424e-bf99-ef8734aea9e7\") pod \"logging-loki-ingester-0\" (UID: \"1ef2f625-286b-49c8-97d9-a98350cfea7b\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.126273 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70edc9e-c0c9-49d3-ba51-e3967bc1b54e" path="/var/lib/kubelet/pods/e70edc9e-c0c9-49d3-ba51-e3967bc1b54e/volumes" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.163934 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.163990 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164262 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164438 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164478 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164538 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mzj6\" (UniqueName: \"kubernetes.io/projected/ba5833e7-becf-412f-879b-6cab8777fb0b-kube-api-access-7mzj6\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164577 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba5833e7-becf-412f-879b-6cab8777fb0b-config\") pod \"logging-loki-compactor-0\" (UID: 
\"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164629 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164661 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164729 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-config\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164753 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxws\" (UniqueName: \"kubernetes.io/projected/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-kube-api-access-bnxws\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164780 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1f5bc57-936f-415f-8dbe-af72dbaad3a2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1f5bc57-936f-415f-8dbe-af72dbaad3a2\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.164804 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28e1744d-4d66-4ff7-8c04-512105dfcc28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28e1744d-4d66-4ff7-8c04-512105dfcc28\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.165527 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.167884 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba5833e7-becf-412f-879b-6cab8777fb0b-config\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.171317 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.173278 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.174280 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1f5bc57-936f-415f-8dbe-af72dbaad3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1f5bc57-936f-415f-8dbe-af72dbaad3a2\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d9b35a4f6dc035ef0c9f3d178c4ebc4e47d868e60aa8b070ecb6eeff04d21bc3/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.174667 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.192872 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ba5833e7-becf-412f-879b-6cab8777fb0b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.193868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mzj6\" (UniqueName: \"kubernetes.io/projected/ba5833e7-becf-412f-879b-6cab8777fb0b-kube-api-access-7mzj6\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.202012 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-b1f5bc57-936f-415f-8dbe-af72dbaad3a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1f5bc57-936f-415f-8dbe-af72dbaad3a2\") pod \"logging-loki-compactor-0\" (UID: \"ba5833e7-becf-412f-879b-6cab8777fb0b\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.202627 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.239838 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.266655 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-config\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.266708 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnxws\" (UniqueName: \"kubernetes.io/projected/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-kube-api-access-bnxws\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.266745 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28e1744d-4d66-4ff7-8c04-512105dfcc28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28e1744d-4d66-4ff7-8c04-512105dfcc28\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.266796 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.266820 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.266867 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.266894 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.271489 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-config\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 
08:13:59.272646 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.273419 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.275893 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.278314 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.282567 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.282625 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28e1744d-4d66-4ff7-8c04-512105dfcc28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28e1744d-4d66-4ff7-8c04-512105dfcc28\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1fee480776a62adb5cf09bcfe48a588a310c2396e42b80302e4181754a82a9d/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.309587 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnxws\" (UniqueName: \"kubernetes.io/projected/bcc0a610-6eb0-4a5f-88d9-5d069f760c14-kube-api-access-bnxws\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.311163 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-fd898bfdd-dhc65"] Mar 12 08:13:59 crc kubenswrapper[4809]: W0312 08:13:59.314995 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8232d992_4bfb_46ca_a440_647d8c006309.slice/crio-1f8a5a062b56e1831a37e8c55f9d832000365f769c5fc4d543c3a02cdea3c01d WatchSource:0}: Error finding container 1f8a5a062b56e1831a37e8c55f9d832000365f769c5fc4d543c3a02cdea3c01d: Status 404 returned error can't find the container with id 1f8a5a062b56e1831a37e8c55f9d832000365f769c5fc4d543c3a02cdea3c01d Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.324988 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28e1744d-4d66-4ff7-8c04-512105dfcc28\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28e1744d-4d66-4ff7-8c04-512105dfcc28\") pod \"logging-loki-index-gateway-0\" (UID: \"bcc0a610-6eb0-4a5f-88d9-5d069f760c14\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.339615 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.382756 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2"] Mar 12 08:13:59 crc kubenswrapper[4809]: W0312 08:13:59.398027 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7546fb46_f601_417f_ad26_69a4fb625fdc.slice/crio-75ba5feff6c940693ccc1fb02a5c48a708220650801921aac70b6287827112d7 WatchSource:0}: Error finding container 75ba5feff6c940693ccc1fb02a5c48a708220650801921aac70b6287827112d7: Status 404 returned error can't find the container with id 75ba5feff6c940693ccc1fb02a5c48a708220650801921aac70b6287827112d7 Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.417514 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" event={"ID":"7546fb46-f601-417f-ad26-69a4fb625fdc","Type":"ContainerStarted","Data":"75ba5feff6c940693ccc1fb02a5c48a708220650801921aac70b6287827112d7"} Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.419304 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" event={"ID":"c3630a5f-f4c4-42af-8335-60dbcbdb4961","Type":"ContainerStarted","Data":"45707a58fe3c584c29064aa588ebe88a3b6a62dc315a576784d28b31dae7a02f"} Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.420568 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" 
event={"ID":"8ac4723a-9ff0-4186-8177-8a86f6db8b9f","Type":"ContainerStarted","Data":"af8731663d986277c61fa052f462a002db3707be38dd02882b33b80e83855fc5"} Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.422228 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" event={"ID":"8232d992-4bfb-46ca-a440-647d8c006309","Type":"ContainerStarted","Data":"1f8a5a062b56e1831a37e8c55f9d832000365f769c5fc4d543c3a02cdea3c01d"} Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.423645 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" event={"ID":"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4","Type":"ContainerStarted","Data":"4b8437e362fd4af15c1daa8f6a181c0dbc95bcca9b918fa8f1463143152176ff"} Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.577553 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 08:13:59 crc kubenswrapper[4809]: W0312 08:13:59.586386 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ef2f625_286b_49c8_97d9_a98350cfea7b.slice/crio-35b126cbcd902bc317bd76583c19a416a7b7b30306f8fbb133d421ee2b053161 WatchSource:0}: Error finding container 35b126cbcd902bc317bd76583c19a416a7b7b30306f8fbb133d421ee2b053161: Status 404 returned error can't find the container with id 35b126cbcd902bc317bd76583c19a416a7b7b30306f8fbb133d421ee2b053161 Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.700323 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 08:13:59 crc kubenswrapper[4809]: W0312 08:13:59.707639 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba5833e7_becf_412f_879b_6cab8777fb0b.slice/crio-8284a8de82d49a50b640997517cb9dea9a3ff890b6caa668af769bb442b04ca2 WatchSource:0}: Error finding container 8284a8de82d49a50b640997517cb9dea9a3ff890b6caa668af769bb442b04ca2: Status 404 returned error can't find the container with id 8284a8de82d49a50b640997517cb9dea9a3ff890b6caa668af769bb442b04ca2 Mar 12 08:13:59 crc kubenswrapper[4809]: I0312 08:13:59.832222 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 08:13:59 crc kubenswrapper[4809]: W0312 08:13:59.839873 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcc0a610_6eb0_4a5f_88d9_5d069f760c14.slice/crio-e21e865589f9ebd4c012c26fa2625fd057c075ef6b8bfc8e1ee274549dcb1670 WatchSource:0}: Error finding container e21e865589f9ebd4c012c26fa2625fd057c075ef6b8bfc8e1ee274549dcb1670: Status 404 returned error can't find the container with id e21e865589f9ebd4c012c26fa2625fd057c075ef6b8bfc8e1ee274549dcb1670 Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.150359 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555054-4zjhg"] Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.153510 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555054-4zjhg" Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.156677 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.157074 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.157886 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.164824 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555054-4zjhg"] Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.290816 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrhl\" (UniqueName: \"kubernetes.io/projected/fcdb6b44-c8aa-4f25-82eb-f49c0566fc41-kube-api-access-rbrhl\") pod \"auto-csr-approver-29555054-4zjhg\" (UID: \"fcdb6b44-c8aa-4f25-82eb-f49c0566fc41\") " pod="openshift-infra/auto-csr-approver-29555054-4zjhg" Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.393370 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrhl\" (UniqueName: \"kubernetes.io/projected/fcdb6b44-c8aa-4f25-82eb-f49c0566fc41-kube-api-access-rbrhl\") pod \"auto-csr-approver-29555054-4zjhg\" (UID: \"fcdb6b44-c8aa-4f25-82eb-f49c0566fc41\") " pod="openshift-infra/auto-csr-approver-29555054-4zjhg" Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.422051 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrhl\" (UniqueName: \"kubernetes.io/projected/fcdb6b44-c8aa-4f25-82eb-f49c0566fc41-kube-api-access-rbrhl\") pod \"auto-csr-approver-29555054-4zjhg\" (UID: \"fcdb6b44-c8aa-4f25-82eb-f49c0566fc41\") " 
pod="openshift-infra/auto-csr-approver-29555054-4zjhg" Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.434151 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"1ef2f625-286b-49c8-97d9-a98350cfea7b","Type":"ContainerStarted","Data":"35b126cbcd902bc317bd76583c19a416a7b7b30306f8fbb133d421ee2b053161"} Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.437018 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"bcc0a610-6eb0-4a5f-88d9-5d069f760c14","Type":"ContainerStarted","Data":"e21e865589f9ebd4c012c26fa2625fd057c075ef6b8bfc8e1ee274549dcb1670"} Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.438828 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"ba5833e7-becf-412f-879b-6cab8777fb0b","Type":"ContainerStarted","Data":"8284a8de82d49a50b640997517cb9dea9a3ff890b6caa668af769bb442b04ca2"} Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.508080 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555054-4zjhg" Mar 12 08:14:00 crc kubenswrapper[4809]: I0312 08:14:00.987515 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555054-4zjhg"] Mar 12 08:14:01 crc kubenswrapper[4809]: I0312 08:14:01.449132 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555054-4zjhg" event={"ID":"fcdb6b44-c8aa-4f25-82eb-f49c0566fc41","Type":"ContainerStarted","Data":"f8b3d9208c86dd3c8140cb80575e4d364090ffcd0f8c6a657f36a6e852d34c3e"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.475101 4809 generic.go:334] "Generic (PLEG): container finished" podID="fcdb6b44-c8aa-4f25-82eb-f49c0566fc41" containerID="d9f1ba49fb9a53764bd9806a21df910c36608b752a45981f23ee548921c3bf9a" exitCode=0 Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.475356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555054-4zjhg" event={"ID":"fcdb6b44-c8aa-4f25-82eb-f49c0566fc41","Type":"ContainerDied","Data":"d9f1ba49fb9a53764bd9806a21df910c36608b752a45981f23ee548921c3bf9a"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.478386 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" event={"ID":"7546fb46-f601-417f-ad26-69a4fb625fdc","Type":"ContainerStarted","Data":"4d575b102b94948be7708634a06a36f27c02c3130015cc907911d075d8472210"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.479879 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"bcc0a610-6eb0-4a5f-88d9-5d069f760c14","Type":"ContainerStarted","Data":"7b2b44c365524e465a47808d1f96f6879eb0a1cc8dcb674561aa249561e50243"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.480306 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 
12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.488019 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" event={"ID":"c3630a5f-f4c4-42af-8335-60dbcbdb4961","Type":"ContainerStarted","Data":"119b18916e72f4dc33bbc5bc201aa68d5be3b9e62d53f8eb5a8e3d4ea8b72c91"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.488122 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.491266 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"ba5833e7-becf-412f-879b-6cab8777fb0b","Type":"ContainerStarted","Data":"f6de25c8513c9a778a9294b2a9d01e58b9574d2a136d420fd133d6dce808d319"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.491547 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.493105 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" event={"ID":"8ac4723a-9ff0-4186-8177-8a86f6db8b9f","Type":"ContainerStarted","Data":"8ef3e8be000831870c6412ba75edeca91f6bf1f2ddfe6219dbd6a9067bc32c91"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.493705 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.495564 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" event={"ID":"8232d992-4bfb-46ca-a440-647d8c006309","Type":"ContainerStarted","Data":"e5568bb0980218bcb69ee211a0132c6b0de5c50303f25a7d4e96c96e69a33b4a"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.497633 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"1ef2f625-286b-49c8-97d9-a98350cfea7b","Type":"ContainerStarted","Data":"ad3da89966c6506906f0b08e2f8d38ca618f49b7509475bbcaffce534aa5ab05"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.497826 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.498762 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" event={"ID":"9fc47673-0fe3-49f6-a2bb-06845a2f3fc4","Type":"ContainerStarted","Data":"3f5373f1c8a484c139b35bb7b8dc339eaac067cee9720abd8ece1163b2498c80"} Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.499252 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.547392 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.6445582979999998 podStartE2EDuration="7.547362379s" podCreationTimestamp="2026-03-12 08:13:57 +0000 UTC" firstStartedPulling="2026-03-12 08:13:59.590078807 +0000 UTC m=+913.172114540" lastFinishedPulling="2026-03-12 08:14:03.492882888 +0000 UTC m=+917.074918621" observedRunningTime="2026-03-12 08:14:04.541474527 +0000 UTC m=+918.123510260" watchObservedRunningTime="2026-03-12 08:14:04.547362379 +0000 UTC m=+918.129398112" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.563411 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.918403003 podStartE2EDuration="7.56339168s" podCreationTimestamp="2026-03-12 08:13:57 +0000 UTC" firstStartedPulling="2026-03-12 08:13:59.844271191 +0000 UTC m=+913.426306924" lastFinishedPulling="2026-03-12 08:14:03.489259868 
+0000 UTC m=+917.071295601" observedRunningTime="2026-03-12 08:14:04.56230851 +0000 UTC m=+918.144344263" watchObservedRunningTime="2026-03-12 08:14:04.56339168 +0000 UTC m=+918.145427413" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.583961 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" podStartSLOduration=3.044625951 podStartE2EDuration="7.583931346s" podCreationTimestamp="2026-03-12 08:13:57 +0000 UTC" firstStartedPulling="2026-03-12 08:13:58.978767257 +0000 UTC m=+912.560803000" lastFinishedPulling="2026-03-12 08:14:03.518072652 +0000 UTC m=+917.100108395" observedRunningTime="2026-03-12 08:14:04.582439375 +0000 UTC m=+918.164475128" watchObservedRunningTime="2026-03-12 08:14:04.583931346 +0000 UTC m=+918.165967089" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.607617 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" podStartSLOduration=2.726149606 podStartE2EDuration="7.607591788s" podCreationTimestamp="2026-03-12 08:13:57 +0000 UTC" firstStartedPulling="2026-03-12 08:13:58.73028311 +0000 UTC m=+912.312318843" lastFinishedPulling="2026-03-12 08:14:03.611725292 +0000 UTC m=+917.193761025" observedRunningTime="2026-03-12 08:14:04.602335593 +0000 UTC m=+918.184371326" watchObservedRunningTime="2026-03-12 08:14:04.607591788 +0000 UTC m=+918.189627521" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.624599 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.722790844 podStartE2EDuration="7.624581016s" podCreationTimestamp="2026-03-12 08:13:57 +0000 UTC" firstStartedPulling="2026-03-12 08:13:59.709898069 +0000 UTC m=+913.291933802" lastFinishedPulling="2026-03-12 08:14:03.611688241 +0000 UTC m=+917.193723974" observedRunningTime="2026-03-12 08:14:04.622816258 +0000 UTC 
m=+918.204851991" watchObservedRunningTime="2026-03-12 08:14:04.624581016 +0000 UTC m=+918.206616749" Mar 12 08:14:04 crc kubenswrapper[4809]: I0312 08:14:04.649079 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" podStartSLOduration=2.879260525 podStartE2EDuration="7.64906204s" podCreationTimestamp="2026-03-12 08:13:57 +0000 UTC" firstStartedPulling="2026-03-12 08:13:58.690541776 +0000 UTC m=+912.272577509" lastFinishedPulling="2026-03-12 08:14:03.460343281 +0000 UTC m=+917.042379024" observedRunningTime="2026-03-12 08:14:04.643004973 +0000 UTC m=+918.225040716" watchObservedRunningTime="2026-03-12 08:14:04.64906204 +0000 UTC m=+918.231097773" Mar 12 08:14:05 crc kubenswrapper[4809]: I0312 08:14:05.883810 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555054-4zjhg" Mar 12 08:14:05 crc kubenswrapper[4809]: I0312 08:14:05.966873 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbrhl\" (UniqueName: \"kubernetes.io/projected/fcdb6b44-c8aa-4f25-82eb-f49c0566fc41-kube-api-access-rbrhl\") pod \"fcdb6b44-c8aa-4f25-82eb-f49c0566fc41\" (UID: \"fcdb6b44-c8aa-4f25-82eb-f49c0566fc41\") " Mar 12 08:14:05 crc kubenswrapper[4809]: I0312 08:14:05.976276 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcdb6b44-c8aa-4f25-82eb-f49c0566fc41-kube-api-access-rbrhl" (OuterVolumeSpecName: "kube-api-access-rbrhl") pod "fcdb6b44-c8aa-4f25-82eb-f49c0566fc41" (UID: "fcdb6b44-c8aa-4f25-82eb-f49c0566fc41"). InnerVolumeSpecName "kube-api-access-rbrhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:14:06 crc kubenswrapper[4809]: I0312 08:14:06.069227 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbrhl\" (UniqueName: \"kubernetes.io/projected/fcdb6b44-c8aa-4f25-82eb-f49c0566fc41-kube-api-access-rbrhl\") on node \"crc\" DevicePath \"\"" Mar 12 08:14:06 crc kubenswrapper[4809]: I0312 08:14:06.520245 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555054-4zjhg" Mar 12 08:14:06 crc kubenswrapper[4809]: I0312 08:14:06.520253 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555054-4zjhg" event={"ID":"fcdb6b44-c8aa-4f25-82eb-f49c0566fc41","Type":"ContainerDied","Data":"f8b3d9208c86dd3c8140cb80575e4d364090ffcd0f8c6a657f36a6e852d34c3e"} Mar 12 08:14:06 crc kubenswrapper[4809]: I0312 08:14:06.520316 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b3d9208c86dd3c8140cb80575e4d364090ffcd0f8c6a657f36a6e852d34c3e" Mar 12 08:14:06 crc kubenswrapper[4809]: I0312 08:14:06.979019 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555048-5nxr7"] Mar 12 08:14:06 crc kubenswrapper[4809]: I0312 08:14:06.994757 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555048-5nxr7"] Mar 12 08:14:07 crc kubenswrapper[4809]: I0312 08:14:07.129145 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d51c3ce-6d91-4798-9fed-d45c18fad38d" path="/var/lib/kubelet/pods/3d51c3ce-6d91-4798-9fed-d45c18fad38d/volumes" Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.555537 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" 
event={"ID":"7546fb46-f601-417f-ad26-69a4fb625fdc","Type":"ContainerStarted","Data":"c7b25fa765c99f23e580dc34fb532e78b76bb30d3caccc15fc47d0f3cc99e6b2"} Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.556224 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.556251 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.557831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" event={"ID":"8232d992-4bfb-46ca-a440-647d8c006309","Type":"ContainerStarted","Data":"c1eaae48da6b22e5f911f4aa9b9e3fde1ac8f80a980ba60012200abb0f4b2d59"} Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.558043 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.568588 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.570273 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.574132 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.584705 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" podStartSLOduration=2.604792901 podStartE2EDuration="11.584678904s" podCreationTimestamp="2026-03-12 08:13:58 +0000 UTC" 
firstStartedPulling="2026-03-12 08:13:59.400879345 +0000 UTC m=+912.982915078" lastFinishedPulling="2026-03-12 08:14:08.380765348 +0000 UTC m=+921.962801081" observedRunningTime="2026-03-12 08:14:09.579920464 +0000 UTC m=+923.161956257" watchObservedRunningTime="2026-03-12 08:14:09.584678904 +0000 UTC m=+923.166714677" Mar 12 08:14:09 crc kubenswrapper[4809]: I0312 08:14:09.655199 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" podStartSLOduration=2.5964902629999997 podStartE2EDuration="11.655175157s" podCreationTimestamp="2026-03-12 08:13:58 +0000 UTC" firstStartedPulling="2026-03-12 08:13:59.319575545 +0000 UTC m=+912.901611278" lastFinishedPulling="2026-03-12 08:14:08.378260399 +0000 UTC m=+921.960296172" observedRunningTime="2026-03-12 08:14:09.649208473 +0000 UTC m=+923.231244216" watchObservedRunningTime="2026-03-12 08:14:09.655175157 +0000 UTC m=+923.237210890" Mar 12 08:14:10 crc kubenswrapper[4809]: I0312 08:14:10.568180 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:14:10 crc kubenswrapper[4809]: I0312 08:14:10.581456 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" Mar 12 08:14:15 crc kubenswrapper[4809]: I0312 08:14:15.049065 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:14:15 crc kubenswrapper[4809]: I0312 08:14:15.049690 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.020914 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jpxmj"] Mar 12 08:14:17 crc kubenswrapper[4809]: E0312 08:14:17.021690 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcdb6b44-c8aa-4f25-82eb-f49c0566fc41" containerName="oc" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.021704 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcdb6b44-c8aa-4f25-82eb-f49c0566fc41" containerName="oc" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.021878 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcdb6b44-c8aa-4f25-82eb-f49c0566fc41" containerName="oc" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.022855 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.051144 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jpxmj"] Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.159167 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzsm\" (UniqueName: \"kubernetes.io/projected/2f899d57-2fec-46c4-834b-1aab4aa33534-kube-api-access-zrzsm\") pod \"certified-operators-jpxmj\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.159534 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-catalog-content\") pod \"certified-operators-jpxmj\" (UID: 
\"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.159652 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-utilities\") pod \"certified-operators-jpxmj\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.261653 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-catalog-content\") pod \"certified-operators-jpxmj\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.261716 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-utilities\") pod \"certified-operators-jpxmj\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.261787 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrzsm\" (UniqueName: \"kubernetes.io/projected/2f899d57-2fec-46c4-834b-1aab4aa33534-kube-api-access-zrzsm\") pod \"certified-operators-jpxmj\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.262161 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-catalog-content\") pod \"certified-operators-jpxmj\" (UID: 
\"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.262319 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-utilities\") pod \"certified-operators-jpxmj\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.286394 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrzsm\" (UniqueName: \"kubernetes.io/projected/2f899d57-2fec-46c4-834b-1aab4aa33534-kube-api-access-zrzsm\") pod \"certified-operators-jpxmj\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.350393 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:17 crc kubenswrapper[4809]: I0312 08:14:17.824610 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jpxmj"] Mar 12 08:14:18 crc kubenswrapper[4809]: I0312 08:14:18.020635 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 08:14:18 crc kubenswrapper[4809]: I0312 08:14:18.297377 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 08:14:18 crc kubenswrapper[4809]: I0312 08:14:18.343722 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 08:14:18 crc kubenswrapper[4809]: I0312 08:14:18.630322 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerID="74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f" exitCode=0 Mar 12 08:14:18 crc kubenswrapper[4809]: I0312 08:14:18.630380 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxmj" event={"ID":"2f899d57-2fec-46c4-834b-1aab4aa33534","Type":"ContainerDied","Data":"74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f"} Mar 12 08:14:18 crc kubenswrapper[4809]: I0312 08:14:18.630461 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxmj" event={"ID":"2f899d57-2fec-46c4-834b-1aab4aa33534","Type":"ContainerStarted","Data":"4159d32e311319828fed6e14887f25cf26b6bfed9d7d0bec1adc3de510c78bbf"} Mar 12 08:14:19 crc kubenswrapper[4809]: I0312 08:14:19.210399 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 12 08:14:19 crc kubenswrapper[4809]: I0312 08:14:19.210811 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="1ef2f625-286b-49c8-97d9-a98350cfea7b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 08:14:19 crc kubenswrapper[4809]: I0312 08:14:19.247930 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 12 08:14:19 crc kubenswrapper[4809]: I0312 08:14:19.350908 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 08:14:19 crc kubenswrapper[4809]: I0312 08:14:19.640380 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxmj" 
event={"ID":"2f899d57-2fec-46c4-834b-1aab4aa33534","Type":"ContainerStarted","Data":"591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e"} Mar 12 08:14:20 crc kubenswrapper[4809]: I0312 08:14:20.649751 4809 generic.go:334] "Generic (PLEG): container finished" podID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerID="591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e" exitCode=0 Mar 12 08:14:20 crc kubenswrapper[4809]: I0312 08:14:20.649835 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxmj" event={"ID":"2f899d57-2fec-46c4-834b-1aab4aa33534","Type":"ContainerDied","Data":"591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e"} Mar 12 08:14:21 crc kubenswrapper[4809]: I0312 08:14:21.664258 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxmj" event={"ID":"2f899d57-2fec-46c4-834b-1aab4aa33534","Type":"ContainerStarted","Data":"6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2"} Mar 12 08:14:21 crc kubenswrapper[4809]: I0312 08:14:21.699093 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jpxmj" podStartSLOduration=2.304300365 podStartE2EDuration="4.699060281s" podCreationTimestamp="2026-03-12 08:14:17 +0000 UTC" firstStartedPulling="2026-03-12 08:14:18.632068336 +0000 UTC m=+932.214104079" lastFinishedPulling="2026-03-12 08:14:21.026828262 +0000 UTC m=+934.608863995" observedRunningTime="2026-03-12 08:14:21.691653607 +0000 UTC m=+935.273689360" watchObservedRunningTime="2026-03-12 08:14:21.699060281 +0000 UTC m=+935.281096054" Mar 12 08:14:27 crc kubenswrapper[4809]: I0312 08:14:27.350686 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:27 crc kubenswrapper[4809]: I0312 08:14:27.351433 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:27 crc kubenswrapper[4809]: I0312 08:14:27.411449 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:27 crc kubenswrapper[4809]: I0312 08:14:27.786407 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:27 crc kubenswrapper[4809]: I0312 08:14:27.843589 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jpxmj"] Mar 12 08:14:29 crc kubenswrapper[4809]: I0312 08:14:29.215168 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 12 08:14:29 crc kubenswrapper[4809]: I0312 08:14:29.215618 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="1ef2f625-286b-49c8-97d9-a98350cfea7b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 08:14:29 crc kubenswrapper[4809]: I0312 08:14:29.736733 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jpxmj" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerName="registry-server" containerID="cri-o://6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2" gracePeriod=2 Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.144667 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.218371 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrzsm\" (UniqueName: \"kubernetes.io/projected/2f899d57-2fec-46c4-834b-1aab4aa33534-kube-api-access-zrzsm\") pod \"2f899d57-2fec-46c4-834b-1aab4aa33534\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.218477 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-utilities\") pod \"2f899d57-2fec-46c4-834b-1aab4aa33534\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.218734 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-catalog-content\") pod \"2f899d57-2fec-46c4-834b-1aab4aa33534\" (UID: \"2f899d57-2fec-46c4-834b-1aab4aa33534\") " Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.219904 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-utilities" (OuterVolumeSpecName: "utilities") pod "2f899d57-2fec-46c4-834b-1aab4aa33534" (UID: "2f899d57-2fec-46c4-834b-1aab4aa33534"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.240651 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f899d57-2fec-46c4-834b-1aab4aa33534-kube-api-access-zrzsm" (OuterVolumeSpecName: "kube-api-access-zrzsm") pod "2f899d57-2fec-46c4-834b-1aab4aa33534" (UID: "2f899d57-2fec-46c4-834b-1aab4aa33534"). InnerVolumeSpecName "kube-api-access-zrzsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.320877 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrzsm\" (UniqueName: \"kubernetes.io/projected/2f899d57-2fec-46c4-834b-1aab4aa33534-kube-api-access-zrzsm\") on node \"crc\" DevicePath \"\"" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.320946 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.751707 4809 generic.go:334] "Generic (PLEG): container finished" podID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerID="6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2" exitCode=0 Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.751779 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxmj" event={"ID":"2f899d57-2fec-46c4-834b-1aab4aa33534","Type":"ContainerDied","Data":"6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2"} Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.751832 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jpxmj" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.751855 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxmj" event={"ID":"2f899d57-2fec-46c4-834b-1aab4aa33534","Type":"ContainerDied","Data":"4159d32e311319828fed6e14887f25cf26b6bfed9d7d0bec1adc3de510c78bbf"} Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.751877 4809 scope.go:117] "RemoveContainer" containerID="6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.781267 4809 scope.go:117] "RemoveContainer" containerID="591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.829148 4809 scope.go:117] "RemoveContainer" containerID="74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.848662 4809 scope.go:117] "RemoveContainer" containerID="6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2" Mar 12 08:14:30 crc kubenswrapper[4809]: E0312 08:14:30.849243 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2\": container with ID starting with 6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2 not found: ID does not exist" containerID="6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.849275 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2"} err="failed to get container status \"6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2\": rpc error: code = NotFound desc = could not find container 
\"6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2\": container with ID starting with 6cd522c9a16b0dde77907362d13ee619d6994caa1fe596ed271b72235ec305a2 not found: ID does not exist" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.849297 4809 scope.go:117] "RemoveContainer" containerID="591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e" Mar 12 08:14:30 crc kubenswrapper[4809]: E0312 08:14:30.849754 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e\": container with ID starting with 591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e not found: ID does not exist" containerID="591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.849785 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e"} err="failed to get container status \"591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e\": rpc error: code = NotFound desc = could not find container \"591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e\": container with ID starting with 591771d71ee5c854467d6d58e9763f0de5b749cececcda98b52c1dba0b832a9e not found: ID does not exist" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.849800 4809 scope.go:117] "RemoveContainer" containerID="74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f" Mar 12 08:14:30 crc kubenswrapper[4809]: E0312 08:14:30.850077 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f\": container with ID starting with 74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f not found: ID does not exist" 
containerID="74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f" Mar 12 08:14:30 crc kubenswrapper[4809]: I0312 08:14:30.850100 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f"} err="failed to get container status \"74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f\": rpc error: code = NotFound desc = could not find container \"74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f\": container with ID starting with 74f05b875c8ebcb279673960d6230dfccd0e0fbcc6814b2f270658c2d0d5664f not found: ID does not exist" Mar 12 08:14:31 crc kubenswrapper[4809]: I0312 08:14:31.193274 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f899d57-2fec-46c4-834b-1aab4aa33534" (UID: "2f899d57-2fec-46c4-834b-1aab4aa33534"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:14:31 crc kubenswrapper[4809]: I0312 08:14:31.237355 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f899d57-2fec-46c4-834b-1aab4aa33534-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:14:31 crc kubenswrapper[4809]: I0312 08:14:31.392835 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jpxmj"] Mar 12 08:14:31 crc kubenswrapper[4809]: I0312 08:14:31.401377 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jpxmj"] Mar 12 08:14:33 crc kubenswrapper[4809]: I0312 08:14:33.119456 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" path="/var/lib/kubelet/pods/2f899d57-2fec-46c4-834b-1aab4aa33534/volumes" Mar 12 08:14:39 crc kubenswrapper[4809]: I0312 08:14:39.215931 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 12 08:14:39 crc kubenswrapper[4809]: I0312 08:14:39.217092 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="1ef2f625-286b-49c8-97d9-a98350cfea7b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 08:14:45 crc kubenswrapper[4809]: I0312 08:14:45.049467 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:14:45 crc kubenswrapper[4809]: I0312 08:14:45.050303 4809 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:14:49 crc kubenswrapper[4809]: I0312 08:14:49.209657 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 12 08:14:49 crc kubenswrapper[4809]: I0312 08:14:49.211005 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="1ef2f625-286b-49c8-97d9-a98350cfea7b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 08:14:58 crc kubenswrapper[4809]: I0312 08:14:58.723651 4809 scope.go:117] "RemoveContainer" containerID="2c948b97be361afc93d6dd63a1bf112c8ed27c0075ff33bc83cde3bbe8ddf6d4" Mar 12 08:14:58 crc kubenswrapper[4809]: I0312 08:14:58.845269 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sdbgn"] Mar 12 08:14:58 crc kubenswrapper[4809]: E0312 08:14:58.845578 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerName="extract-content" Mar 12 08:14:58 crc kubenswrapper[4809]: I0312 08:14:58.845597 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerName="extract-content" Mar 12 08:14:58 crc kubenswrapper[4809]: E0312 08:14:58.845617 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerName="registry-server" Mar 12 08:14:58 crc kubenswrapper[4809]: I0312 08:14:58.845626 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerName="registry-server" Mar 12 08:14:58 crc kubenswrapper[4809]: E0312 08:14:58.845637 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerName="extract-utilities" Mar 12 08:14:58 crc kubenswrapper[4809]: I0312 08:14:58.845646 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerName="extract-utilities" Mar 12 08:14:58 crc kubenswrapper[4809]: I0312 08:14:58.845790 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f899d57-2fec-46c4-834b-1aab4aa33534" containerName="registry-server" Mar 12 08:14:58 crc kubenswrapper[4809]: I0312 08:14:58.847031 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:58 crc kubenswrapper[4809]: I0312 08:14:58.860352 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdbgn"] Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.045318 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-catalog-content\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.046140 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-utilities\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.046839 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zg8f\" (UniqueName: \"kubernetes.io/projected/8b1e551d-d0e4-42e3-8eae-36b028ba581b-kube-api-access-2zg8f\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.148956 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-utilities\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.149060 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zg8f\" (UniqueName: \"kubernetes.io/projected/8b1e551d-d0e4-42e3-8eae-36b028ba581b-kube-api-access-2zg8f\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.149146 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-catalog-content\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.149727 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-utilities\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.149768 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-catalog-content\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.170520 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zg8f\" (UniqueName: \"kubernetes.io/projected/8b1e551d-d0e4-42e3-8eae-36b028ba581b-kube-api-access-2zg8f\") pod \"redhat-marketplace-sdbgn\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.187042 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.211711 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 12 08:14:59 crc kubenswrapper[4809]: I0312 08:14:59.718881 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdbgn"] Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.140514 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb"] Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.142056 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.144449 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.144559 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.150027 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb"] Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.267525 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8580f49-42bb-4276-8173-4d83e59bb923-secret-volume\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.267628 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8580f49-42bb-4276-8173-4d83e59bb923-config-volume\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.267844 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q9cs\" (UniqueName: \"kubernetes.io/projected/f8580f49-42bb-4276-8173-4d83e59bb923-kube-api-access-7q9cs\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.369816 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q9cs\" (UniqueName: \"kubernetes.io/projected/f8580f49-42bb-4276-8173-4d83e59bb923-kube-api-access-7q9cs\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.369942 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8580f49-42bb-4276-8173-4d83e59bb923-secret-volume\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.369993 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8580f49-42bb-4276-8173-4d83e59bb923-config-volume\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.371288 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8580f49-42bb-4276-8173-4d83e59bb923-config-volume\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.379298 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f8580f49-42bb-4276-8173-4d83e59bb923-secret-volume\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.388964 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q9cs\" (UniqueName: \"kubernetes.io/projected/f8580f49-42bb-4276-8173-4d83e59bb923-kube-api-access-7q9cs\") pod \"collect-profiles-29555055-wxgnb\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.409196 4809 generic.go:334] "Generic (PLEG): container finished" podID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerID="f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d" exitCode=0 Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.409260 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdbgn" event={"ID":"8b1e551d-d0e4-42e3-8eae-36b028ba581b","Type":"ContainerDied","Data":"f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d"} Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.409295 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdbgn" event={"ID":"8b1e551d-d0e4-42e3-8eae-36b028ba581b","Type":"ContainerStarted","Data":"5587b65adf35c3f0135748ea1c1dd0fc9980e8429ac66a5e811c5f13a71a7166"} Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.459589 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:00 crc kubenswrapper[4809]: I0312 08:15:00.710342 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb"] Mar 12 08:15:00 crc kubenswrapper[4809]: W0312 08:15:00.715857 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8580f49_42bb_4276_8173_4d83e59bb923.slice/crio-319fed20093650d61930e610506e8f8f8a6f2412b0d76862cbb21ed965805292 WatchSource:0}: Error finding container 319fed20093650d61930e610506e8f8f8a6f2412b0d76862cbb21ed965805292: Status 404 returned error can't find the container with id 319fed20093650d61930e610506e8f8f8a6f2412b0d76862cbb21ed965805292 Mar 12 08:15:01 crc kubenswrapper[4809]: I0312 08:15:01.417753 4809 generic.go:334] "Generic (PLEG): container finished" podID="f8580f49-42bb-4276-8173-4d83e59bb923" containerID="b5301b8ec3a60e237fbbbf50703eb0ba1a7bddeb2d250bd34645eba699761fab" exitCode=0 Mar 12 08:15:01 crc kubenswrapper[4809]: I0312 08:15:01.417934 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" event={"ID":"f8580f49-42bb-4276-8173-4d83e59bb923","Type":"ContainerDied","Data":"b5301b8ec3a60e237fbbbf50703eb0ba1a7bddeb2d250bd34645eba699761fab"} Mar 12 08:15:01 crc kubenswrapper[4809]: I0312 08:15:01.418166 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" event={"ID":"f8580f49-42bb-4276-8173-4d83e59bb923","Type":"ContainerStarted","Data":"319fed20093650d61930e610506e8f8f8a6f2412b0d76862cbb21ed965805292"} Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.426588 4809 generic.go:334] "Generic (PLEG): container finished" podID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" 
containerID="50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b" exitCode=0 Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.427306 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdbgn" event={"ID":"8b1e551d-d0e4-42e3-8eae-36b028ba581b","Type":"ContainerDied","Data":"50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b"} Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.837850 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.941701 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8580f49-42bb-4276-8173-4d83e59bb923-secret-volume\") pod \"f8580f49-42bb-4276-8173-4d83e59bb923\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.941979 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q9cs\" (UniqueName: \"kubernetes.io/projected/f8580f49-42bb-4276-8173-4d83e59bb923-kube-api-access-7q9cs\") pod \"f8580f49-42bb-4276-8173-4d83e59bb923\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.942098 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8580f49-42bb-4276-8173-4d83e59bb923-config-volume\") pod \"f8580f49-42bb-4276-8173-4d83e59bb923\" (UID: \"f8580f49-42bb-4276-8173-4d83e59bb923\") " Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.947728 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8580f49-42bb-4276-8173-4d83e59bb923-config-volume" (OuterVolumeSpecName: "config-volume") pod "f8580f49-42bb-4276-8173-4d83e59bb923" 
(UID: "f8580f49-42bb-4276-8173-4d83e59bb923"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.955987 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8580f49-42bb-4276-8173-4d83e59bb923-kube-api-access-7q9cs" (OuterVolumeSpecName: "kube-api-access-7q9cs") pod "f8580f49-42bb-4276-8173-4d83e59bb923" (UID: "f8580f49-42bb-4276-8173-4d83e59bb923"). InnerVolumeSpecName "kube-api-access-7q9cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:15:02 crc kubenswrapper[4809]: I0312 08:15:02.956814 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8580f49-42bb-4276-8173-4d83e59bb923-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f8580f49-42bb-4276-8173-4d83e59bb923" (UID: "f8580f49-42bb-4276-8173-4d83e59bb923"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:15:03 crc kubenswrapper[4809]: I0312 08:15:03.043625 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f8580f49-42bb-4276-8173-4d83e59bb923-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:03 crc kubenswrapper[4809]: I0312 08:15:03.043667 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q9cs\" (UniqueName: \"kubernetes.io/projected/f8580f49-42bb-4276-8173-4d83e59bb923-kube-api-access-7q9cs\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:03 crc kubenswrapper[4809]: I0312 08:15:03.043679 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8580f49-42bb-4276-8173-4d83e59bb923-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:03 crc kubenswrapper[4809]: I0312 08:15:03.443213 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sdbgn" event={"ID":"8b1e551d-d0e4-42e3-8eae-36b028ba581b","Type":"ContainerStarted","Data":"8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668"} Mar 12 08:15:03 crc kubenswrapper[4809]: I0312 08:15:03.445884 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" event={"ID":"f8580f49-42bb-4276-8173-4d83e59bb923","Type":"ContainerDied","Data":"319fed20093650d61930e610506e8f8f8a6f2412b0d76862cbb21ed965805292"} Mar 12 08:15:03 crc kubenswrapper[4809]: I0312 08:15:03.445922 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319fed20093650d61930e610506e8f8f8a6f2412b0d76862cbb21ed965805292" Mar 12 08:15:03 crc kubenswrapper[4809]: I0312 08:15:03.445934 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb" Mar 12 08:15:03 crc kubenswrapper[4809]: I0312 08:15:03.480409 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sdbgn" podStartSLOduration=3.054071656 podStartE2EDuration="5.48038856s" podCreationTimestamp="2026-03-12 08:14:58 +0000 UTC" firstStartedPulling="2026-03-12 08:15:00.411202065 +0000 UTC m=+973.993237798" lastFinishedPulling="2026-03-12 08:15:02.837518969 +0000 UTC m=+976.419554702" observedRunningTime="2026-03-12 08:15:03.476253576 +0000 UTC m=+977.058289379" watchObservedRunningTime="2026-03-12 08:15:03.48038856 +0000 UTC m=+977.062424293" Mar 12 08:15:09 crc kubenswrapper[4809]: I0312 08:15:09.187443 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:15:09 crc kubenswrapper[4809]: I0312 08:15:09.188493 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 
08:15:09 crc kubenswrapper[4809]: I0312 08:15:09.279222 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:15:09 crc kubenswrapper[4809]: I0312 08:15:09.587821 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:15:09 crc kubenswrapper[4809]: I0312 08:15:09.671669 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdbgn"] Mar 12 08:15:11 crc kubenswrapper[4809]: I0312 08:15:11.537600 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sdbgn" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerName="registry-server" containerID="cri-o://8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668" gracePeriod=2 Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.027138 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.116375 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-utilities\") pod \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.120465 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-utilities" (OuterVolumeSpecName: "utilities") pod "8b1e551d-d0e4-42e3-8eae-36b028ba581b" (UID: "8b1e551d-d0e4-42e3-8eae-36b028ba581b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.116515 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zg8f\" (UniqueName: \"kubernetes.io/projected/8b1e551d-d0e4-42e3-8eae-36b028ba581b-kube-api-access-2zg8f\") pod \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.120696 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-catalog-content\") pod \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\" (UID: \"8b1e551d-d0e4-42e3-8eae-36b028ba581b\") " Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.121472 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.129103 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1e551d-d0e4-42e3-8eae-36b028ba581b-kube-api-access-2zg8f" (OuterVolumeSpecName: "kube-api-access-2zg8f") pod "8b1e551d-d0e4-42e3-8eae-36b028ba581b" (UID: "8b1e551d-d0e4-42e3-8eae-36b028ba581b"). InnerVolumeSpecName "kube-api-access-2zg8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.161574 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b1e551d-d0e4-42e3-8eae-36b028ba581b" (UID: "8b1e551d-d0e4-42e3-8eae-36b028ba581b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.223917 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zg8f\" (UniqueName: \"kubernetes.io/projected/8b1e551d-d0e4-42e3-8eae-36b028ba581b-kube-api-access-2zg8f\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.223991 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b1e551d-d0e4-42e3-8eae-36b028ba581b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.550776 4809 generic.go:334] "Generic (PLEG): container finished" podID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerID="8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668" exitCode=0 Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.550846 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdbgn" event={"ID":"8b1e551d-d0e4-42e3-8eae-36b028ba581b","Type":"ContainerDied","Data":"8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668"} Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.550937 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdbgn" event={"ID":"8b1e551d-d0e4-42e3-8eae-36b028ba581b","Type":"ContainerDied","Data":"5587b65adf35c3f0135748ea1c1dd0fc9980e8429ac66a5e811c5f13a71a7166"} Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.550897 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdbgn" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.550988 4809 scope.go:117] "RemoveContainer" containerID="8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.590789 4809 scope.go:117] "RemoveContainer" containerID="50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.609849 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdbgn"] Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.614261 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdbgn"] Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.627272 4809 scope.go:117] "RemoveContainer" containerID="f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.668491 4809 scope.go:117] "RemoveContainer" containerID="8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668" Mar 12 08:15:12 crc kubenswrapper[4809]: E0312 08:15:12.669108 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668\": container with ID starting with 8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668 not found: ID does not exist" containerID="8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.669167 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668"} err="failed to get container status \"8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668\": rpc error: code = NotFound desc = could not find container 
\"8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668\": container with ID starting with 8b24970d220fa820f86f264dc057cb5a9a80ee70005b7a868a00c9e0fa6f9668 not found: ID does not exist" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.669189 4809 scope.go:117] "RemoveContainer" containerID="50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b" Mar 12 08:15:12 crc kubenswrapper[4809]: E0312 08:15:12.669525 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b\": container with ID starting with 50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b not found: ID does not exist" containerID="50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.669578 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b"} err="failed to get container status \"50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b\": rpc error: code = NotFound desc = could not find container \"50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b\": container with ID starting with 50925230a067ea8e021e4417ca414688ab5b5d542fa78f4faa6964b9466bd79b not found: ID does not exist" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.669613 4809 scope.go:117] "RemoveContainer" containerID="f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d" Mar 12 08:15:12 crc kubenswrapper[4809]: E0312 08:15:12.670068 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d\": container with ID starting with f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d not found: ID does not exist" 
containerID="f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d" Mar 12 08:15:12 crc kubenswrapper[4809]: I0312 08:15:12.670093 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d"} err="failed to get container status \"f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d\": rpc error: code = NotFound desc = could not find container \"f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d\": container with ID starting with f4087b911b4e159342c3964413e012bcd0e9333c64b6b1b7df06a20cd850e81d not found: ID does not exist" Mar 12 08:15:13 crc kubenswrapper[4809]: I0312 08:15:13.146247 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" path="/var/lib/kubelet/pods/8b1e551d-d0e4-42e3-8eae-36b028ba581b/volumes" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.049022 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.049798 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.049904 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.051259 4809 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a05a1c82af92d83a47e94a370bbfc613a47227fe6fbddbdc5d572ba375fcf3f"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.051431 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://5a05a1c82af92d83a47e94a370bbfc613a47227fe6fbddbdc5d572ba375fcf3f" gracePeriod=600 Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.590041 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="5a05a1c82af92d83a47e94a370bbfc613a47227fe6fbddbdc5d572ba375fcf3f" exitCode=0 Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.590135 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"5a05a1c82af92d83a47e94a370bbfc613a47227fe6fbddbdc5d572ba375fcf3f"} Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.590576 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"943f7c8e36ba68e565889e8e2f0d5e4ed7b8e09774d0614bb78547c2761cd4cc"} Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.590606 4809 scope.go:117] "RemoveContainer" containerID="ca8598ebfe987f1558a37c5bf134fcec2279989f25be8a694a644461aa780ee9" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.697194 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-dm6zl"] Mar 12 08:15:15 crc 
kubenswrapper[4809]: E0312 08:15:15.697578 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerName="registry-server" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.697595 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerName="registry-server" Mar 12 08:15:15 crc kubenswrapper[4809]: E0312 08:15:15.697623 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerName="extract-utilities" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.697632 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerName="extract-utilities" Mar 12 08:15:15 crc kubenswrapper[4809]: E0312 08:15:15.697640 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerName="extract-content" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.697647 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerName="extract-content" Mar 12 08:15:15 crc kubenswrapper[4809]: E0312 08:15:15.697663 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8580f49-42bb-4276-8173-4d83e59bb923" containerName="collect-profiles" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.697668 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8580f49-42bb-4276-8173-4d83e59bb923" containerName="collect-profiles" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.697799 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1e551d-d0e4-42e3-8eae-36b028ba581b" containerName="registry-server" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.697812 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8580f49-42bb-4276-8173-4d83e59bb923" containerName="collect-profiles" Mar 12 08:15:15 
crc kubenswrapper[4809]: I0312 08:15:15.698388 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.713161 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-dm6zl"] Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.716979 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.721660 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.721915 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-v56z4" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.722176 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.722306 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.722429 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.791338 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-dm6zl"] Mar 12 08:15:15 crc kubenswrapper[4809]: E0312 08:15:15.791934 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-4qrr8 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-dm6zl" podUID="5d134e61-c886-41c1-a540-76cf92fb8a5f" Mar 12 08:15:15 crc 
kubenswrapper[4809]: I0312 08:15:15.801900 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-entrypoint\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.802322 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d134e61-c886-41c1-a540-76cf92fb8a5f-tmp\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.802521 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-trusted-ca\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.802731 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-sa-token\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.802870 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.803025 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-metrics\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.803261 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-syslog-receiver\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.803378 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qrr8\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-kube-api-access-4qrr8\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.803427 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-token\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.803481 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5d134e61-c886-41c1-a540-76cf92fb8a5f-datadir\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.803592 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config-openshift-service-cacrt\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.905519 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-entrypoint\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.905988 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d134e61-c886-41c1-a540-76cf92fb8a5f-tmp\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906019 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-trusted-ca\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906063 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-sa-token\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906090 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906139 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-metrics\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906161 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-syslog-receiver\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906186 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qrr8\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-kube-api-access-4qrr8\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906205 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-token\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906225 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5d134e61-c886-41c1-a540-76cf92fb8a5f-datadir\") pod \"collector-dm6zl\" (UID: 
\"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906248 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config-openshift-service-cacrt\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.906981 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config-openshift-service-cacrt\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.907090 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-trusted-ca\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.907092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5d134e61-c886-41c1-a540-76cf92fb8a5f-datadir\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.907705 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-entrypoint\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 
08:15:15.907824 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.913735 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-syslog-receiver\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.914042 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-token\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.920493 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d134e61-c886-41c1-a540-76cf92fb8a5f-tmp\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.924541 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-sa-token\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.932628 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qrr8\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-kube-api-access-4qrr8\") 
pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:15 crc kubenswrapper[4809]: I0312 08:15:15.938017 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-metrics\") pod \"collector-dm6zl\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " pod="openshift-logging/collector-dm6zl" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.615616 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dm6zl" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.635878 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dm6zl" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.720706 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-token\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.720800 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d134e61-c886-41c1-a540-76cf92fb8a5f-tmp\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.720886 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qrr8\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-kube-api-access-4qrr8\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.720993 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5d134e61-c886-41c1-a540-76cf92fb8a5f-datadir\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.721069 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-syslog-receiver\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.721139 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config-openshift-service-cacrt\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.721169 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-trusted-ca\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.721157 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d134e61-c886-41c1-a540-76cf92fb8a5f-datadir" (OuterVolumeSpecName: "datadir") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.721242 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-metrics\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.721377 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.721619 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-sa-token\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.721688 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-entrypoint\") pod \"5d134e61-c886-41c1-a540-76cf92fb8a5f\" (UID: \"5d134e61-c886-41c1-a540-76cf92fb8a5f\") " Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.722183 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config" (OuterVolumeSpecName: "config") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.722307 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.722605 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.722801 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.723719 4809 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5d134e61-c886-41c1-a540-76cf92fb8a5f-datadir\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.723772 4809 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.723801 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.723833 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.723858 4809 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5d134e61-c886-41c1-a540-76cf92fb8a5f-entrypoint\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.726176 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-sa-token" (OuterVolumeSpecName: "sa-token") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.726956 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d134e61-c886-41c1-a540-76cf92fb8a5f-tmp" (OuterVolumeSpecName: "tmp") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.726937 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.727356 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-token" (OuterVolumeSpecName: "collector-token") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.727536 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-metrics" (OuterVolumeSpecName: "metrics") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.729008 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-kube-api-access-4qrr8" (OuterVolumeSpecName: "kube-api-access-4qrr8") pod "5d134e61-c886-41c1-a540-76cf92fb8a5f" (UID: "5d134e61-c886-41c1-a540-76cf92fb8a5f"). InnerVolumeSpecName "kube-api-access-4qrr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.826058 4809 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.826141 4809 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-metrics\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.826159 4809 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.826176 4809 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5d134e61-c886-41c1-a540-76cf92fb8a5f-collector-token\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.826191 4809 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d134e61-c886-41c1-a540-76cf92fb8a5f-tmp\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:16 crc kubenswrapper[4809]: I0312 08:15:16.826205 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qrr8\" (UniqueName: 
\"kubernetes.io/projected/5d134e61-c886-41c1-a540-76cf92fb8a5f-kube-api-access-4qrr8\") on node \"crc\" DevicePath \"\"" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.621156 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-dm6zl" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.686314 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-dm6zl"] Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.692082 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-dm6zl"] Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.708061 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-n9p24"] Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.713216 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.722580 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.723133 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.724516 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-v56z4" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.725014 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.725477 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.731676 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-n9p24"] Mar 12 08:15:17 crc 
kubenswrapper[4809]: I0312 08:15:17.734683 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.844409 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-config\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.844482 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-metrics\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845160 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/20298793-4684-4286-a96e-36aea4b5be08-sa-token\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-entrypoint\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845348 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-config-openshift-service-cacrt\") pod \"collector-n9p24\" 
(UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845405 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-trusted-ca\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845457 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-collector-syslog-receiver\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845524 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-collector-token\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845575 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20298793-4684-4286-a96e-36aea4b5be08-tmp\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s8js\" (UniqueName: \"kubernetes.io/projected/20298793-4684-4286-a96e-36aea4b5be08-kube-api-access-7s8js\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") 
" pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.845664 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/20298793-4684-4286-a96e-36aea4b5be08-datadir\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947219 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/20298793-4684-4286-a96e-36aea4b5be08-sa-token\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947295 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-entrypoint\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947357 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-config-openshift-service-cacrt\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947392 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-trusted-ca\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947435 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-collector-syslog-receiver\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947493 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-collector-token\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947540 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20298793-4684-4286-a96e-36aea4b5be08-tmp\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947573 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s8js\" (UniqueName: \"kubernetes.io/projected/20298793-4684-4286-a96e-36aea4b5be08-kube-api-access-7s8js\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947610 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/20298793-4684-4286-a96e-36aea4b5be08-datadir\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947679 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-config\") 
pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.947717 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-metrics\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.948482 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/20298793-4684-4286-a96e-36aea4b5be08-datadir\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.949387 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-entrypoint\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.949449 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-config-openshift-service-cacrt\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.949496 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-trusted-ca\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.950321 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20298793-4684-4286-a96e-36aea4b5be08-config\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.956063 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-collector-syslog-receiver\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.956390 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20298793-4684-4286-a96e-36aea4b5be08-tmp\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.957944 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-collector-token\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.969817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/20298793-4684-4286-a96e-36aea4b5be08-sa-token\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.969867 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/20298793-4684-4286-a96e-36aea4b5be08-metrics\") pod \"collector-n9p24\" (UID: 
\"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:17 crc kubenswrapper[4809]: I0312 08:15:17.972262 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s8js\" (UniqueName: \"kubernetes.io/projected/20298793-4684-4286-a96e-36aea4b5be08-kube-api-access-7s8js\") pod \"collector-n9p24\" (UID: \"20298793-4684-4286-a96e-36aea4b5be08\") " pod="openshift-logging/collector-n9p24" Mar 12 08:15:18 crc kubenswrapper[4809]: I0312 08:15:18.036345 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-n9p24" Mar 12 08:15:18 crc kubenswrapper[4809]: I0312 08:15:18.525947 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-n9p24"] Mar 12 08:15:18 crc kubenswrapper[4809]: I0312 08:15:18.630665 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-n9p24" event={"ID":"20298793-4684-4286-a96e-36aea4b5be08","Type":"ContainerStarted","Data":"42fcac375587e8a011844611bf63a6632e97cea9b5c67450f9e149bd51554a48"} Mar 12 08:15:19 crc kubenswrapper[4809]: I0312 08:15:19.118637 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d134e61-c886-41c1-a540-76cf92fb8a5f" path="/var/lib/kubelet/pods/5d134e61-c886-41c1-a540-76cf92fb8a5f/volumes" Mar 12 08:15:25 crc kubenswrapper[4809]: I0312 08:15:25.696429 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-n9p24" event={"ID":"20298793-4684-4286-a96e-36aea4b5be08","Type":"ContainerStarted","Data":"fabf349dc5b86ce054943ad4c035a31374d46e37161342c96a71cbdea8dde31e"} Mar 12 08:15:25 crc kubenswrapper[4809]: I0312 08:15:25.751539 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-n9p24" podStartSLOduration=2.68679975 podStartE2EDuration="8.75149787s" podCreationTimestamp="2026-03-12 08:15:17 +0000 UTC" firstStartedPulling="2026-03-12 
08:15:18.538319779 +0000 UTC m=+992.120355512" lastFinishedPulling="2026-03-12 08:15:24.603017899 +0000 UTC m=+998.185053632" observedRunningTime="2026-03-12 08:15:25.731592501 +0000 UTC m=+999.313628324" watchObservedRunningTime="2026-03-12 08:15:25.75149787 +0000 UTC m=+999.333533683" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.076939 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7"] Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.078783 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.081071 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.093610 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7"] Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.216724 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.216776 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.216895 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk8dp\" (UniqueName: \"kubernetes.io/projected/85e8b987-8bed-4d15-b39c-5fd8834e6994-kube-api-access-wk8dp\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.318587 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.318634 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.318699 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk8dp\" (UniqueName: \"kubernetes.io/projected/85e8b987-8bed-4d15-b39c-5fd8834e6994-kube-api-access-wk8dp\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 
08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.319199 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.319457 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.342988 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk8dp\" (UniqueName: \"kubernetes.io/projected/85e8b987-8bed-4d15-b39c-5fd8834e6994-kube-api-access-wk8dp\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.397350 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:15:55 crc kubenswrapper[4809]: I0312 08:15:55.979596 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7"] Mar 12 08:15:57 crc kubenswrapper[4809]: I0312 08:15:57.014818 4809 generic.go:334] "Generic (PLEG): container finished" podID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerID="14eb046f16f656cc6193e5575919f6d229307f7524d88ca79418e6cd950af4d1" exitCode=0 Mar 12 08:15:57 crc kubenswrapper[4809]: I0312 08:15:57.015040 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" event={"ID":"85e8b987-8bed-4d15-b39c-5fd8834e6994","Type":"ContainerDied","Data":"14eb046f16f656cc6193e5575919f6d229307f7524d88ca79418e6cd950af4d1"} Mar 12 08:15:57 crc kubenswrapper[4809]: I0312 08:15:57.015225 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" event={"ID":"85e8b987-8bed-4d15-b39c-5fd8834e6994","Type":"ContainerStarted","Data":"481fa4a734a4c0868e50790ffb3e166b6e698404cc8ea2be2e96a776a128b3a8"} Mar 12 08:15:59 crc kubenswrapper[4809]: I0312 08:15:59.032973 4809 generic.go:334] "Generic (PLEG): container finished" podID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerID="8141106ed5c1c27ebbc94c742271c27ee399ab3bd1b5b716c05a5d596f7fb8de" exitCode=0 Mar 12 08:15:59 crc kubenswrapper[4809]: I0312 08:15:59.033076 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" event={"ID":"85e8b987-8bed-4d15-b39c-5fd8834e6994","Type":"ContainerDied","Data":"8141106ed5c1c27ebbc94c742271c27ee399ab3bd1b5b716c05a5d596f7fb8de"} Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.049807 4809 
generic.go:334] "Generic (PLEG): container finished" podID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerID="c6d88fc56afb7e29ebfcb707612fcdd29188f0a55c44b41cf3829598f8d2c04a" exitCode=0 Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.049926 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" event={"ID":"85e8b987-8bed-4d15-b39c-5fd8834e6994","Type":"ContainerDied","Data":"c6d88fc56afb7e29ebfcb707612fcdd29188f0a55c44b41cf3829598f8d2c04a"} Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.167075 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555056-49bj8"] Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.168914 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555056-49bj8" Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.171167 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.172522 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.177047 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.180605 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555056-49bj8"] Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.216181 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vql8h\" (UniqueName: \"kubernetes.io/projected/549c9413-b716-47a7-975c-2b9ebf41d33a-kube-api-access-vql8h\") pod \"auto-csr-approver-29555056-49bj8\" (UID: \"549c9413-b716-47a7-975c-2b9ebf41d33a\") " 
pod="openshift-infra/auto-csr-approver-29555056-49bj8" Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.318273 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vql8h\" (UniqueName: \"kubernetes.io/projected/549c9413-b716-47a7-975c-2b9ebf41d33a-kube-api-access-vql8h\") pod \"auto-csr-approver-29555056-49bj8\" (UID: \"549c9413-b716-47a7-975c-2b9ebf41d33a\") " pod="openshift-infra/auto-csr-approver-29555056-49bj8" Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.356266 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vql8h\" (UniqueName: \"kubernetes.io/projected/549c9413-b716-47a7-975c-2b9ebf41d33a-kube-api-access-vql8h\") pod \"auto-csr-approver-29555056-49bj8\" (UID: \"549c9413-b716-47a7-975c-2b9ebf41d33a\") " pod="openshift-infra/auto-csr-approver-29555056-49bj8" Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.492963 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555056-49bj8" Mar 12 08:16:00 crc kubenswrapper[4809]: I0312 08:16:00.771544 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555056-49bj8"] Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.064656 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555056-49bj8" event={"ID":"549c9413-b716-47a7-975c-2b9ebf41d33a","Type":"ContainerStarted","Data":"313798986ef308d818fcef8b73b85a6525202860b004c500b32420afc43215b6"} Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.419982 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.538440 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-bundle\") pod \"85e8b987-8bed-4d15-b39c-5fd8834e6994\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.538558 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk8dp\" (UniqueName: \"kubernetes.io/projected/85e8b987-8bed-4d15-b39c-5fd8834e6994-kube-api-access-wk8dp\") pod \"85e8b987-8bed-4d15-b39c-5fd8834e6994\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.538604 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-util\") pod \"85e8b987-8bed-4d15-b39c-5fd8834e6994\" (UID: \"85e8b987-8bed-4d15-b39c-5fd8834e6994\") " Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.539215 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-bundle" (OuterVolumeSpecName: "bundle") pod "85e8b987-8bed-4d15-b39c-5fd8834e6994" (UID: "85e8b987-8bed-4d15-b39c-5fd8834e6994"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.549050 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-util" (OuterVolumeSpecName: "util") pod "85e8b987-8bed-4d15-b39c-5fd8834e6994" (UID: "85e8b987-8bed-4d15-b39c-5fd8834e6994"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.551016 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e8b987-8bed-4d15-b39c-5fd8834e6994-kube-api-access-wk8dp" (OuterVolumeSpecName: "kube-api-access-wk8dp") pod "85e8b987-8bed-4d15-b39c-5fd8834e6994" (UID: "85e8b987-8bed-4d15-b39c-5fd8834e6994"). InnerVolumeSpecName "kube-api-access-wk8dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.640677 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk8dp\" (UniqueName: \"kubernetes.io/projected/85e8b987-8bed-4d15-b39c-5fd8834e6994-kube-api-access-wk8dp\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.640929 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-util\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:01 crc kubenswrapper[4809]: I0312 08:16:01.641135 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85e8b987-8bed-4d15-b39c-5fd8834e6994-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:02 crc kubenswrapper[4809]: I0312 08:16:02.079066 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" event={"ID":"85e8b987-8bed-4d15-b39c-5fd8834e6994","Type":"ContainerDied","Data":"481fa4a734a4c0868e50790ffb3e166b6e698404cc8ea2be2e96a776a128b3a8"} Mar 12 08:16:02 crc kubenswrapper[4809]: I0312 08:16:02.079143 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="481fa4a734a4c0868e50790ffb3e166b6e698404cc8ea2be2e96a776a128b3a8" Mar 12 08:16:02 crc kubenswrapper[4809]: I0312 08:16:02.079150 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7" Mar 12 08:16:02 crc kubenswrapper[4809]: I0312 08:16:02.080809 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555056-49bj8" event={"ID":"549c9413-b716-47a7-975c-2b9ebf41d33a","Type":"ContainerStarted","Data":"a943e7d437baa098c22ea2afb32f2ab7a588ef39c670ce6fe89510e51b74ed6f"} Mar 12 08:16:02 crc kubenswrapper[4809]: I0312 08:16:02.112798 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555056-49bj8" podStartSLOduration=1.265663971 podStartE2EDuration="2.112771958s" podCreationTimestamp="2026-03-12 08:16:00 +0000 UTC" firstStartedPulling="2026-03-12 08:16:00.781840732 +0000 UTC m=+1034.363876475" lastFinishedPulling="2026-03-12 08:16:01.628948729 +0000 UTC m=+1035.210984462" observedRunningTime="2026-03-12 08:16:02.107909244 +0000 UTC m=+1035.689944977" watchObservedRunningTime="2026-03-12 08:16:02.112771958 +0000 UTC m=+1035.694807691" Mar 12 08:16:03 crc kubenswrapper[4809]: I0312 08:16:03.091311 4809 generic.go:334] "Generic (PLEG): container finished" podID="549c9413-b716-47a7-975c-2b9ebf41d33a" containerID="a943e7d437baa098c22ea2afb32f2ab7a588ef39c670ce6fe89510e51b74ed6f" exitCode=0 Mar 12 08:16:03 crc kubenswrapper[4809]: I0312 08:16:03.091370 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555056-49bj8" event={"ID":"549c9413-b716-47a7-975c-2b9ebf41d33a","Type":"ContainerDied","Data":"a943e7d437baa098c22ea2afb32f2ab7a588ef39c670ce6fe89510e51b74ed6f"} Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.376149 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555056-49bj8" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.496154 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vql8h\" (UniqueName: \"kubernetes.io/projected/549c9413-b716-47a7-975c-2b9ebf41d33a-kube-api-access-vql8h\") pod \"549c9413-b716-47a7-975c-2b9ebf41d33a\" (UID: \"549c9413-b716-47a7-975c-2b9ebf41d33a\") " Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.505447 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549c9413-b716-47a7-975c-2b9ebf41d33a-kube-api-access-vql8h" (OuterVolumeSpecName: "kube-api-access-vql8h") pod "549c9413-b716-47a7-975c-2b9ebf41d33a" (UID: "549c9413-b716-47a7-975c-2b9ebf41d33a"). InnerVolumeSpecName "kube-api-access-vql8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.598867 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vql8h\" (UniqueName: \"kubernetes.io/projected/549c9413-b716-47a7-975c-2b9ebf41d33a-kube-api-access-vql8h\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.733467 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr"] Mar 12 08:16:04 crc kubenswrapper[4809]: E0312 08:16:04.733802 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerName="extract" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.733825 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerName="extract" Mar 12 08:16:04 crc kubenswrapper[4809]: E0312 08:16:04.733850 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerName="util" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 
08:16:04.733860 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerName="util" Mar 12 08:16:04 crc kubenswrapper[4809]: E0312 08:16:04.733870 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549c9413-b716-47a7-975c-2b9ebf41d33a" containerName="oc" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.733881 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="549c9413-b716-47a7-975c-2b9ebf41d33a" containerName="oc" Mar 12 08:16:04 crc kubenswrapper[4809]: E0312 08:16:04.733896 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerName="pull" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.733906 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerName="pull" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.734068 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="549c9413-b716-47a7-975c-2b9ebf41d33a" containerName="oc" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.734098 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e8b987-8bed-4d15-b39c-5fd8834e6994" containerName="extract" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.734810 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.737702 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.737775 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.738300 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qc9wq" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.750644 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr"] Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.802266 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmdf\" (UniqueName: \"kubernetes.io/projected/5fcddb8c-3912-454f-9b48-9137114837a7-kube-api-access-fmmdf\") pod \"nmstate-operator-796d4cfff4-cjfhr\" (UID: \"5fcddb8c-3912-454f-9b48-9137114837a7\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.904409 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmmdf\" (UniqueName: \"kubernetes.io/projected/5fcddb8c-3912-454f-9b48-9137114837a7-kube-api-access-fmmdf\") pod \"nmstate-operator-796d4cfff4-cjfhr\" (UID: \"5fcddb8c-3912-454f-9b48-9137114837a7\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr" Mar 12 08:16:04 crc kubenswrapper[4809]: I0312 08:16:04.928037 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmmdf\" (UniqueName: \"kubernetes.io/projected/5fcddb8c-3912-454f-9b48-9137114837a7-kube-api-access-fmmdf\") pod \"nmstate-operator-796d4cfff4-cjfhr\" (UID: 
\"5fcddb8c-3912-454f-9b48-9137114837a7\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr" Mar 12 08:16:05 crc kubenswrapper[4809]: I0312 08:16:05.096485 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr" Mar 12 08:16:05 crc kubenswrapper[4809]: I0312 08:16:05.109655 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555056-49bj8" Mar 12 08:16:05 crc kubenswrapper[4809]: I0312 08:16:05.116468 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555056-49bj8" event={"ID":"549c9413-b716-47a7-975c-2b9ebf41d33a","Type":"ContainerDied","Data":"313798986ef308d818fcef8b73b85a6525202860b004c500b32420afc43215b6"} Mar 12 08:16:05 crc kubenswrapper[4809]: I0312 08:16:05.116768 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="313798986ef308d818fcef8b73b85a6525202860b004c500b32420afc43215b6" Mar 12 08:16:05 crc kubenswrapper[4809]: I0312 08:16:05.207071 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555050-vrdkj"] Mar 12 08:16:05 crc kubenswrapper[4809]: I0312 08:16:05.220650 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555050-vrdkj"] Mar 12 08:16:05 crc kubenswrapper[4809]: I0312 08:16:05.710548 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr"] Mar 12 08:16:06 crc kubenswrapper[4809]: I0312 08:16:06.121907 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr" event={"ID":"5fcddb8c-3912-454f-9b48-9137114837a7","Type":"ContainerStarted","Data":"b9aa254596419d70e78dbe55e4f71ebd62968ab375966cf8633d4fe7e873d469"} Mar 12 08:16:07 crc kubenswrapper[4809]: I0312 08:16:07.121840 4809 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="3652d677-b5e0-425e-9805-aa4e6acc9437" path="/var/lib/kubelet/pods/3652d677-b5e0-425e-9805-aa4e6acc9437/volumes" Mar 12 08:16:09 crc kubenswrapper[4809]: I0312 08:16:09.160228 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr" event={"ID":"5fcddb8c-3912-454f-9b48-9137114837a7","Type":"ContainerStarted","Data":"8940c6668a462ac4094d15d11aaa272b9f6eb93022186d83fe53621c752d2dc4"} Mar 12 08:16:09 crc kubenswrapper[4809]: I0312 08:16:09.183612 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-cjfhr" podStartSLOduration=2.212504006 podStartE2EDuration="5.183579247s" podCreationTimestamp="2026-03-12 08:16:04 +0000 UTC" firstStartedPulling="2026-03-12 08:16:05.722325255 +0000 UTC m=+1039.304361028" lastFinishedPulling="2026-03-12 08:16:08.693400536 +0000 UTC m=+1042.275436269" observedRunningTime="2026-03-12 08:16:09.17891436 +0000 UTC m=+1042.760950123" watchObservedRunningTime="2026-03-12 08:16:09.183579247 +0000 UTC m=+1042.765614980" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.151378 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k"] Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.152845 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.158848 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ltkx5" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.165497 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k"] Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.203566 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rjks\" (UniqueName: \"kubernetes.io/projected/f05544ad-32fb-491d-ae13-7849090c1f34-kube-api-access-6rjks\") pod \"nmstate-metrics-9b8c8685d-bpg7k\" (UID: \"f05544ad-32fb-491d-ae13-7849090c1f34\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.213853 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b"] Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.214796 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.225967 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.252018 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jgxtq"] Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.252931 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.260382 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b"] Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.306502 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rjks\" (UniqueName: \"kubernetes.io/projected/f05544ad-32fb-491d-ae13-7849090c1f34-kube-api-access-6rjks\") pod \"nmstate-metrics-9b8c8685d-bpg7k\" (UID: \"f05544ad-32fb-491d-ae13-7849090c1f34\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.338314 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rjks\" (UniqueName: \"kubernetes.io/projected/f05544ad-32fb-491d-ae13-7849090c1f34-kube-api-access-6rjks\") pod \"nmstate-metrics-9b8c8685d-bpg7k\" (UID: \"f05544ad-32fb-491d-ae13-7849090c1f34\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.382635 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc"] Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.383517 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.399879 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gq27m" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.400135 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.400264 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408707 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnm6l\" (UniqueName: \"kubernetes.io/projected/9a21a990-10ce-4677-95d2-df00083cbe34-kube-api-access-rnm6l\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408751 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a21a990-10ce-4677-95d2-df00083cbe34-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408795 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9a21a990-10ce-4677-95d2-df00083cbe34-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408825 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-dbus-socket\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408848 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-ovs-socket\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408884 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/293a6f5b-33ba-4398-a1c7-a5f97db11950-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-xxj6b\" (UID: \"293a6f5b-33ba-4398-a1c7-a5f97db11950\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408908 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7t89\" (UniqueName: \"kubernetes.io/projected/f89f6199-4afe-4ace-a7a9-2b8c91451d40-kube-api-access-j7t89\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408946 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9784f\" (UniqueName: \"kubernetes.io/projected/293a6f5b-33ba-4398-a1c7-a5f97db11950-kube-api-access-9784f\") pod \"nmstate-webhook-5f558f5558-xxj6b\" (UID: \"293a6f5b-33ba-4398-a1c7-a5f97db11950\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 
08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.408966 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-nmstate-lock\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.414370 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc"] Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.473789 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.510858 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9784f\" (UniqueName: \"kubernetes.io/projected/293a6f5b-33ba-4398-a1c7-a5f97db11950-kube-api-access-9784f\") pod \"nmstate-webhook-5f558f5558-xxj6b\" (UID: \"293a6f5b-33ba-4398-a1c7-a5f97db11950\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.510900 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-nmstate-lock\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.510938 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnm6l\" (UniqueName: \"kubernetes.io/projected/9a21a990-10ce-4677-95d2-df00083cbe34-kube-api-access-rnm6l\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" 
Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.510955 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a21a990-10ce-4677-95d2-df00083cbe34-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.510988 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9a21a990-10ce-4677-95d2-df00083cbe34-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.511017 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-dbus-socket\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.511040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-ovs-socket\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.511073 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/293a6f5b-33ba-4398-a1c7-a5f97db11950-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-xxj6b\" (UID: \"293a6f5b-33ba-4398-a1c7-a5f97db11950\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:10 
crc kubenswrapper[4809]: I0312 08:16:10.511094 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7t89\" (UniqueName: \"kubernetes.io/projected/f89f6199-4afe-4ace-a7a9-2b8c91451d40-kube-api-access-j7t89\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.512044 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-nmstate-lock\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: E0312 08:16:10.512287 4809 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 12 08:16:10 crc kubenswrapper[4809]: E0312 08:16:10.512332 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a21a990-10ce-4677-95d2-df00083cbe34-plugin-serving-cert podName:9a21a990-10ce-4677-95d2-df00083cbe34 nodeName:}" failed. No retries permitted until 2026-03-12 08:16:11.012318053 +0000 UTC m=+1044.594353786 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/9a21a990-10ce-4677-95d2-df00083cbe34-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-72xtc" (UID: "9a21a990-10ce-4677-95d2-df00083cbe34") : secret "plugin-serving-cert" not found Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.513175 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9a21a990-10ce-4677-95d2-df00083cbe34-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.513380 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-dbus-socket\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.513408 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f89f6199-4afe-4ace-a7a9-2b8c91451d40-ovs-socket\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: E0312 08:16:10.513448 4809 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 12 08:16:10 crc kubenswrapper[4809]: E0312 08:16:10.513473 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/293a6f5b-33ba-4398-a1c7-a5f97db11950-tls-key-pair podName:293a6f5b-33ba-4398-a1c7-a5f97db11950 nodeName:}" failed. No retries permitted until 2026-03-12 08:16:11.013463514 +0000 UTC m=+1044.595499247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/293a6f5b-33ba-4398-a1c7-a5f97db11950-tls-key-pair") pod "nmstate-webhook-5f558f5558-xxj6b" (UID: "293a6f5b-33ba-4398-a1c7-a5f97db11950") : secret "openshift-nmstate-webhook" not found Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.533393 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7t89\" (UniqueName: \"kubernetes.io/projected/f89f6199-4afe-4ace-a7a9-2b8c91451d40-kube-api-access-j7t89\") pod \"nmstate-handler-jgxtq\" (UID: \"f89f6199-4afe-4ace-a7a9-2b8c91451d40\") " pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.533972 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnm6l\" (UniqueName: \"kubernetes.io/projected/9a21a990-10ce-4677-95d2-df00083cbe34-kube-api-access-rnm6l\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.553922 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9784f\" (UniqueName: \"kubernetes.io/projected/293a6f5b-33ba-4398-a1c7-a5f97db11950-kube-api-access-9784f\") pod \"nmstate-webhook-5f558f5558-xxj6b\" (UID: \"293a6f5b-33ba-4398-a1c7-a5f97db11950\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.583640 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.630565 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57dffbbf6c-jl4cx"] Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.631915 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.639089 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57dffbbf6c-jl4cx"] Mar 12 08:16:10 crc kubenswrapper[4809]: W0312 08:16:10.658750 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf89f6199_4afe_4ace_a7a9_2b8c91451d40.slice/crio-b9e9cc87fd05519f3037280709e7511cc3b9d8261ca58e3ad3147d147358d46c WatchSource:0}: Error finding container b9e9cc87fd05519f3037280709e7511cc3b9d8261ca58e3ad3147d147358d46c: Status 404 returned error can't find the container with id b9e9cc87fd05519f3037280709e7511cc3b9d8261ca58e3ad3147d147358d46c Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.816752 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-oauth-config\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.817168 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-oauth-serving-cert\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.817345 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdvv8\" (UniqueName: \"kubernetes.io/projected/85ec0faa-1bca-4382-807d-35941e6d88fb-kube-api-access-qdvv8\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " 
pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.817385 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-serving-cert\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.817407 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-trusted-ca-bundle\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.817475 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-service-ca\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.817502 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-console-config\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.919258 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-service-ca\") pod \"console-57dffbbf6c-jl4cx\" (UID: 
\"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.919308 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-console-config\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.919341 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-oauth-config\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.919378 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-oauth-serving-cert\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.919410 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvv8\" (UniqueName: \"kubernetes.io/projected/85ec0faa-1bca-4382-807d-35941e6d88fb-kube-api-access-qdvv8\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.919442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-serving-cert\") pod \"console-57dffbbf6c-jl4cx\" (UID: 
\"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.919463 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-trusted-ca-bundle\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.920194 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-service-ca\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.920336 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-console-config\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.920940 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-oauth-serving-cert\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.921265 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-trusted-ca-bundle\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " 
pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.927030 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-oauth-config\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.927135 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-serving-cert\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.940921 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdvv8\" (UniqueName: \"kubernetes.io/projected/85ec0faa-1bca-4382-807d-35941e6d88fb-kube-api-access-qdvv8\") pod \"console-57dffbbf6c-jl4cx\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:10 crc kubenswrapper[4809]: I0312 08:16:10.981868 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.021556 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/293a6f5b-33ba-4398-a1c7-a5f97db11950-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-xxj6b\" (UID: \"293a6f5b-33ba-4398-a1c7-a5f97db11950\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.021679 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a21a990-10ce-4677-95d2-df00083cbe34-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.027002 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/293a6f5b-33ba-4398-a1c7-a5f97db11950-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-xxj6b\" (UID: \"293a6f5b-33ba-4398-a1c7-a5f97db11950\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.030936 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a21a990-10ce-4677-95d2-df00083cbe34-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-72xtc\" (UID: \"9a21a990-10ce-4677-95d2-df00083cbe34\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.056196 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k"] Mar 12 08:16:11 crc kubenswrapper[4809]: W0312 08:16:11.062780 4809 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf05544ad_32fb_491d_ae13_7849090c1f34.slice/crio-e9f7154aa4e36298b0e0fe1d3e5abe0c71aad3aa1c579ea737f8efcbb6ddd1d7 WatchSource:0}: Error finding container e9f7154aa4e36298b0e0fe1d3e5abe0c71aad3aa1c579ea737f8efcbb6ddd1d7: Status 404 returned error can't find the container with id e9f7154aa4e36298b0e0fe1d3e5abe0c71aad3aa1c579ea737f8efcbb6ddd1d7 Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.144714 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.179848 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jgxtq" event={"ID":"f89f6199-4afe-4ace-a7a9-2b8c91451d40","Type":"ContainerStarted","Data":"b9e9cc87fd05519f3037280709e7511cc3b9d8261ca58e3ad3147d147358d46c"} Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.181543 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" event={"ID":"f05544ad-32fb-491d-ae13-7849090c1f34","Type":"ContainerStarted","Data":"e9f7154aa4e36298b0e0fe1d3e5abe0c71aad3aa1c579ea737f8efcbb6ddd1d7"} Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.321204 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.362998 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b"] Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.415162 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57dffbbf6c-jl4cx"] Mar 12 08:16:11 crc kubenswrapper[4809]: W0312 08:16:11.434875 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85ec0faa_1bca_4382_807d_35941e6d88fb.slice/crio-4d6b9a7c2ec5581de0fc1a6e456880f2629179afd572b05e672853a7bde66f67 WatchSource:0}: Error finding container 4d6b9a7c2ec5581de0fc1a6e456880f2629179afd572b05e672853a7bde66f67: Status 404 returned error can't find the container with id 4d6b9a7c2ec5581de0fc1a6e456880f2629179afd572b05e672853a7bde66f67 Mar 12 08:16:11 crc kubenswrapper[4809]: I0312 08:16:11.747264 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc"] Mar 12 08:16:12 crc kubenswrapper[4809]: I0312 08:16:12.191995 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" event={"ID":"293a6f5b-33ba-4398-a1c7-a5f97db11950","Type":"ContainerStarted","Data":"6d44eb8430e3da2ae29c6a02079bfc036666f2c8594db0fc67f551d6dbe2475a"} Mar 12 08:16:12 crc kubenswrapper[4809]: I0312 08:16:12.194256 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" event={"ID":"9a21a990-10ce-4677-95d2-df00083cbe34","Type":"ContainerStarted","Data":"250482759ad5b202b53a6240773d167dbef2c889f31ba70dbf977893dac683c3"} Mar 12 08:16:12 crc kubenswrapper[4809]: I0312 08:16:12.196491 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57dffbbf6c-jl4cx" 
event={"ID":"85ec0faa-1bca-4382-807d-35941e6d88fb","Type":"ContainerStarted","Data":"c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638"} Mar 12 08:16:12 crc kubenswrapper[4809]: I0312 08:16:12.196517 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57dffbbf6c-jl4cx" event={"ID":"85ec0faa-1bca-4382-807d-35941e6d88fb","Type":"ContainerStarted","Data":"4d6b9a7c2ec5581de0fc1a6e456880f2629179afd572b05e672853a7bde66f67"} Mar 12 08:16:12 crc kubenswrapper[4809]: I0312 08:16:12.231982 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57dffbbf6c-jl4cx" podStartSLOduration=2.231949541 podStartE2EDuration="2.231949541s" podCreationTimestamp="2026-03-12 08:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:16:12.230439669 +0000 UTC m=+1045.812475412" watchObservedRunningTime="2026-03-12 08:16:12.231949541 +0000 UTC m=+1045.813985274" Mar 12 08:16:14 crc kubenswrapper[4809]: I0312 08:16:14.214401 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jgxtq" event={"ID":"f89f6199-4afe-4ace-a7a9-2b8c91451d40","Type":"ContainerStarted","Data":"8e59ee4ca9f7070fa8185310f6418d38fbc192eba3db88b1797cdbeb9091de44"} Mar 12 08:16:14 crc kubenswrapper[4809]: I0312 08:16:14.215237 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:14 crc kubenswrapper[4809]: I0312 08:16:14.218586 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" event={"ID":"293a6f5b-33ba-4398-a1c7-a5f97db11950","Type":"ContainerStarted","Data":"fe218de454205d1fcd8a6e28e3f1395a1e19bfc1aff0ff1dd188e37598e9d55d"} Mar 12 08:16:14 crc kubenswrapper[4809]: I0312 08:16:14.218682 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:14 crc kubenswrapper[4809]: I0312 08:16:14.219756 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" event={"ID":"f05544ad-32fb-491d-ae13-7849090c1f34","Type":"ContainerStarted","Data":"e6cd0a4d13c8ce10f4b774bb783800936a37a789d1c929678979bd42faaf70e4"} Mar 12 08:16:14 crc kubenswrapper[4809]: I0312 08:16:14.233440 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jgxtq" podStartSLOduration=1.456244264 podStartE2EDuration="4.233424172s" podCreationTimestamp="2026-03-12 08:16:10 +0000 UTC" firstStartedPulling="2026-03-12 08:16:10.661995524 +0000 UTC m=+1044.244031257" lastFinishedPulling="2026-03-12 08:16:13.439175432 +0000 UTC m=+1047.021211165" observedRunningTime="2026-03-12 08:16:14.23294718 +0000 UTC m=+1047.814982913" watchObservedRunningTime="2026-03-12 08:16:14.233424172 +0000 UTC m=+1047.815459905" Mar 12 08:16:14 crc kubenswrapper[4809]: I0312 08:16:14.252520 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" podStartSLOduration=2.185145027 podStartE2EDuration="4.252496541s" podCreationTimestamp="2026-03-12 08:16:10 +0000 UTC" firstStartedPulling="2026-03-12 08:16:11.373096163 +0000 UTC m=+1044.955131896" lastFinishedPulling="2026-03-12 08:16:13.440447677 +0000 UTC m=+1047.022483410" observedRunningTime="2026-03-12 08:16:14.24987383 +0000 UTC m=+1047.831909563" watchObservedRunningTime="2026-03-12 08:16:14.252496541 +0000 UTC m=+1047.834532274" Mar 12 08:16:15 crc kubenswrapper[4809]: I0312 08:16:15.233873 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" event={"ID":"9a21a990-10ce-4677-95d2-df00083cbe34","Type":"ContainerStarted","Data":"7b28e86f2df36c1c0d9e430b0ba4187696e68068fda1b81f22e695b3002483f4"} Mar 12 08:16:15 crc 
kubenswrapper[4809]: I0312 08:16:15.269757 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-72xtc" podStartSLOduration=2.066591883 podStartE2EDuration="5.269718225s" podCreationTimestamp="2026-03-12 08:16:10 +0000 UTC" firstStartedPulling="2026-03-12 08:16:11.756514481 +0000 UTC m=+1045.338550214" lastFinishedPulling="2026-03-12 08:16:14.959640823 +0000 UTC m=+1048.541676556" observedRunningTime="2026-03-12 08:16:15.257544114 +0000 UTC m=+1048.839579867" watchObservedRunningTime="2026-03-12 08:16:15.269718225 +0000 UTC m=+1048.851753988" Mar 12 08:16:17 crc kubenswrapper[4809]: I0312 08:16:17.255726 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" event={"ID":"f05544ad-32fb-491d-ae13-7849090c1f34","Type":"ContainerStarted","Data":"8f54e056b45d8919be8a9b3eed628958b52c7a1d56c5e1f3c28e7c5b35fe87d9"} Mar 12 08:16:17 crc kubenswrapper[4809]: I0312 08:16:17.294951 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-bpg7k" podStartSLOduration=2.043250408 podStartE2EDuration="7.294920903s" podCreationTimestamp="2026-03-12 08:16:10 +0000 UTC" firstStartedPulling="2026-03-12 08:16:11.06633545 +0000 UTC m=+1044.648371183" lastFinishedPulling="2026-03-12 08:16:16.318005945 +0000 UTC m=+1049.900041678" observedRunningTime="2026-03-12 08:16:17.274267091 +0000 UTC m=+1050.856302834" watchObservedRunningTime="2026-03-12 08:16:17.294920903 +0000 UTC m=+1050.876956636" Mar 12 08:16:20 crc kubenswrapper[4809]: I0312 08:16:20.623650 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jgxtq" Mar 12 08:16:20 crc kubenswrapper[4809]: I0312 08:16:20.982667 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:20 crc kubenswrapper[4809]: I0312 
08:16:20.982711 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:20 crc kubenswrapper[4809]: I0312 08:16:20.991823 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:21 crc kubenswrapper[4809]: I0312 08:16:21.291541 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:16:21 crc kubenswrapper[4809]: I0312 08:16:21.355555 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86779b9dcc-qsh2c"] Mar 12 08:16:31 crc kubenswrapper[4809]: I0312 08:16:31.152741 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" Mar 12 08:16:46 crc kubenswrapper[4809]: I0312 08:16:46.417917 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-86779b9dcc-qsh2c" podUID="c4d10085-7b35-4cc0-ae7a-b9c3f443e326" containerName="console" containerID="cri-o://8fb7cb25a6f08e2581183caaee3ad2a30d71a135aebabbe23188555a771c696c" gracePeriod=15 Mar 12 08:16:46 crc kubenswrapper[4809]: I0312 08:16:46.582467 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86779b9dcc-qsh2c_c4d10085-7b35-4cc0-ae7a-b9c3f443e326/console/0.log" Mar 12 08:16:46 crc kubenswrapper[4809]: I0312 08:16:46.583335 4809 generic.go:334] "Generic (PLEG): container finished" podID="c4d10085-7b35-4cc0-ae7a-b9c3f443e326" containerID="8fb7cb25a6f08e2581183caaee3ad2a30d71a135aebabbe23188555a771c696c" exitCode=2 Mar 12 08:16:46 crc kubenswrapper[4809]: I0312 08:16:46.583373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86779b9dcc-qsh2c" 
event={"ID":"c4d10085-7b35-4cc0-ae7a-b9c3f443e326","Type":"ContainerDied","Data":"8fb7cb25a6f08e2581183caaee3ad2a30d71a135aebabbe23188555a771c696c"} Mar 12 08:16:46 crc kubenswrapper[4809]: I0312 08:16:46.952065 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86779b9dcc-qsh2c_c4d10085-7b35-4cc0-ae7a-b9c3f443e326/console/0.log" Mar 12 08:16:46 crc kubenswrapper[4809]: I0312 08:16:46.952550 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.049390 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-oauth-config\") pod \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.049478 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-trusted-ca-bundle\") pod \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.049604 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd2rx\" (UniqueName: \"kubernetes.io/projected/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-kube-api-access-zd2rx\") pod \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.049645 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-serving-cert\") pod \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\" (UID: 
\"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.049667 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-service-ca\") pod \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.049721 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-config\") pod \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.050885 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-oauth-serving-cert\") pod \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\" (UID: \"c4d10085-7b35-4cc0-ae7a-b9c3f443e326\") " Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.055227 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-config" (OuterVolumeSpecName: "console-config") pod "c4d10085-7b35-4cc0-ae7a-b9c3f443e326" (UID: "c4d10085-7b35-4cc0-ae7a-b9c3f443e326"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.055985 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c4d10085-7b35-4cc0-ae7a-b9c3f443e326" (UID: "c4d10085-7b35-4cc0-ae7a-b9c3f443e326"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.055627 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c4d10085-7b35-4cc0-ae7a-b9c3f443e326" (UID: "c4d10085-7b35-4cc0-ae7a-b9c3f443e326"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.059463 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-service-ca" (OuterVolumeSpecName: "service-ca") pod "c4d10085-7b35-4cc0-ae7a-b9c3f443e326" (UID: "c4d10085-7b35-4cc0-ae7a-b9c3f443e326"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.060253 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c4d10085-7b35-4cc0-ae7a-b9c3f443e326" (UID: "c4d10085-7b35-4cc0-ae7a-b9c3f443e326"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.062523 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c4d10085-7b35-4cc0-ae7a-b9c3f443e326" (UID: "c4d10085-7b35-4cc0-ae7a-b9c3f443e326"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.074770 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-kube-api-access-zd2rx" (OuterVolumeSpecName: "kube-api-access-zd2rx") pod "c4d10085-7b35-4cc0-ae7a-b9c3f443e326" (UID: "c4d10085-7b35-4cc0-ae7a-b9c3f443e326"). InnerVolumeSpecName "kube-api-access-zd2rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.159819 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd2rx\" (UniqueName: \"kubernetes.io/projected/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-kube-api-access-zd2rx\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.159857 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.159870 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.159881 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.159890 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.159901 4809 reconciler_common.go:293] "Volume detached for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.159912 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4d10085-7b35-4cc0-ae7a-b9c3f443e326-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.599578 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-86779b9dcc-qsh2c_c4d10085-7b35-4cc0-ae7a-b9c3f443e326/console/0.log" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.600226 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86779b9dcc-qsh2c" event={"ID":"c4d10085-7b35-4cc0-ae7a-b9c3f443e326","Type":"ContainerDied","Data":"21b91a806d7df23458eb6f3f3b68456ace9b8c28a896e20bfb5d5337a1b76555"} Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.600301 4809 scope.go:117] "RemoveContainer" containerID="8fb7cb25a6f08e2581183caaee3ad2a30d71a135aebabbe23188555a771c696c" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.600671 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86779b9dcc-qsh2c" Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.641696 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-86779b9dcc-qsh2c"] Mar 12 08:16:47 crc kubenswrapper[4809]: I0312 08:16:47.659092 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-86779b9dcc-qsh2c"] Mar 12 08:16:48 crc kubenswrapper[4809]: E0312 08:16:48.239271 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:16:48 crc kubenswrapper[4809]: E0312 08:16:48.239372 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:16:49 crc kubenswrapper[4809]: I0312 08:16:49.116426 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d10085-7b35-4cc0-ae7a-b9c3f443e326" path="/var/lib/kubelet/pods/c4d10085-7b35-4cc0-ae7a-b9c3f443e326/volumes" Mar 12 08:16:49 crc kubenswrapper[4809]: E0312 08:16:49.263922 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:16:50 crc kubenswrapper[4809]: E0312 08:16:50.475588 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory 
cache]" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.019764 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp"] Mar 12 08:16:52 crc kubenswrapper[4809]: E0312 08:16:52.020637 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d10085-7b35-4cc0-ae7a-b9c3f443e326" containerName="console" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.020654 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d10085-7b35-4cc0-ae7a-b9c3f443e326" containerName="console" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.020833 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d10085-7b35-4cc0-ae7a-b9c3f443e326" containerName="console" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.022061 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.023870 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.032929 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp"] Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.173175 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.173295 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.173337 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzf4g\" (UniqueName: \"kubernetes.io/projected/829eb981-1af8-4c1f-982b-47e8141d9154-kube-api-access-gzf4g\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.275087 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.275243 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.275284 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzf4g\" (UniqueName: 
\"kubernetes.io/projected/829eb981-1af8-4c1f-982b-47e8141d9154-kube-api-access-gzf4g\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.276024 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.276477 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.304277 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzf4g\" (UniqueName: \"kubernetes.io/projected/829eb981-1af8-4c1f-982b-47e8141d9154-kube-api-access-gzf4g\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.350222 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:16:52 crc kubenswrapper[4809]: I0312 08:16:52.690248 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp"] Mar 12 08:16:53 crc kubenswrapper[4809]: I0312 08:16:53.665548 4809 generic.go:334] "Generic (PLEG): container finished" podID="829eb981-1af8-4c1f-982b-47e8141d9154" containerID="8fbe2e5936d2f2aa7dc810eaff74708a31568b0f16ec222a0ac70122485d357f" exitCode=0 Mar 12 08:16:53 crc kubenswrapper[4809]: I0312 08:16:53.665722 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" event={"ID":"829eb981-1af8-4c1f-982b-47e8141d9154","Type":"ContainerDied","Data":"8fbe2e5936d2f2aa7dc810eaff74708a31568b0f16ec222a0ac70122485d357f"} Mar 12 08:16:53 crc kubenswrapper[4809]: I0312 08:16:53.667999 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" event={"ID":"829eb981-1af8-4c1f-982b-47e8141d9154","Type":"ContainerStarted","Data":"dbc627f1ea32d1de225ca4c51b04f0b7ed5818a409bf7eb58ef920d1304b6e3c"} Mar 12 08:16:57 crc kubenswrapper[4809]: I0312 08:16:57.719690 4809 generic.go:334] "Generic (PLEG): container finished" podID="829eb981-1af8-4c1f-982b-47e8141d9154" containerID="e80ae4a456569eb038471856344c962ad815e0cac129289ab8ef3ae84ff00044" exitCode=0 Mar 12 08:16:57 crc kubenswrapper[4809]: I0312 08:16:57.719785 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" event={"ID":"829eb981-1af8-4c1f-982b-47e8141d9154","Type":"ContainerDied","Data":"e80ae4a456569eb038471856344c962ad815e0cac129289ab8ef3ae84ff00044"} Mar 12 08:16:58 crc kubenswrapper[4809]: I0312 08:16:58.732247 4809 
generic.go:334] "Generic (PLEG): container finished" podID="829eb981-1af8-4c1f-982b-47e8141d9154" containerID="841b2b4f637c9694f1a75b8495e8d632d16b6ba49f99c5f057d2ce3ffc69e7c4" exitCode=0 Mar 12 08:16:58 crc kubenswrapper[4809]: I0312 08:16:58.732339 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" event={"ID":"829eb981-1af8-4c1f-982b-47e8141d9154","Type":"ContainerDied","Data":"841b2b4f637c9694f1a75b8495e8d632d16b6ba49f99c5f057d2ce3ffc69e7c4"} Mar 12 08:16:58 crc kubenswrapper[4809]: I0312 08:16:58.855678 4809 scope.go:117] "RemoveContainer" containerID="f65f18fe505c98f1153e0d27e39102c27241b5a85047688d9a1aa9184c0cf4c6" Mar 12 08:16:59 crc kubenswrapper[4809]: E0312 08:16:59.428315 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.123670 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.246518 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzf4g\" (UniqueName: \"kubernetes.io/projected/829eb981-1af8-4c1f-982b-47e8141d9154-kube-api-access-gzf4g\") pod \"829eb981-1af8-4c1f-982b-47e8141d9154\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.246623 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-util\") pod \"829eb981-1af8-4c1f-982b-47e8141d9154\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.246779 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-bundle\") pod \"829eb981-1af8-4c1f-982b-47e8141d9154\" (UID: \"829eb981-1af8-4c1f-982b-47e8141d9154\") " Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.248274 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-bundle" (OuterVolumeSpecName: "bundle") pod "829eb981-1af8-4c1f-982b-47e8141d9154" (UID: "829eb981-1af8-4c1f-982b-47e8141d9154"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.254296 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829eb981-1af8-4c1f-982b-47e8141d9154-kube-api-access-gzf4g" (OuterVolumeSpecName: "kube-api-access-gzf4g") pod "829eb981-1af8-4c1f-982b-47e8141d9154" (UID: "829eb981-1af8-4c1f-982b-47e8141d9154"). InnerVolumeSpecName "kube-api-access-gzf4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.263159 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-util" (OuterVolumeSpecName: "util") pod "829eb981-1af8-4c1f-982b-47e8141d9154" (UID: "829eb981-1af8-4c1f-982b-47e8141d9154"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.348529 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.348777 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzf4g\" (UniqueName: \"kubernetes.io/projected/829eb981-1af8-4c1f-982b-47e8141d9154-kube-api-access-gzf4g\") on node \"crc\" DevicePath \"\"" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.348791 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/829eb981-1af8-4c1f-982b-47e8141d9154-util\") on node \"crc\" DevicePath \"\"" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.758920 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" event={"ID":"829eb981-1af8-4c1f-982b-47e8141d9154","Type":"ContainerDied","Data":"dbc627f1ea32d1de225ca4c51b04f0b7ed5818a409bf7eb58ef920d1304b6e3c"} Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.759021 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc627f1ea32d1de225ca4c51b04f0b7ed5818a409bf7eb58ef920d1304b6e3c" Mar 12 08:17:00 crc kubenswrapper[4809]: I0312 08:17:00.759073 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp" Mar 12 08:17:05 crc kubenswrapper[4809]: E0312 08:17:05.476620 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:17:09 crc kubenswrapper[4809]: E0312 08:17:09.643917 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.532659 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw"] Mar 12 08:17:10 crc kubenswrapper[4809]: E0312 08:17:10.552230 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829eb981-1af8-4c1f-982b-47e8141d9154" containerName="pull" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.552270 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="829eb981-1af8-4c1f-982b-47e8141d9154" containerName="pull" Mar 12 08:17:10 crc kubenswrapper[4809]: E0312 08:17:10.552308 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829eb981-1af8-4c1f-982b-47e8141d9154" containerName="extract" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.552314 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="829eb981-1af8-4c1f-982b-47e8141d9154" containerName="extract" Mar 12 08:17:10 crc kubenswrapper[4809]: E0312 08:17:10.552328 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829eb981-1af8-4c1f-982b-47e8141d9154" containerName="util" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.552334 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="829eb981-1af8-4c1f-982b-47e8141d9154" containerName="util" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.552568 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="829eb981-1af8-4c1f-982b-47e8141d9154" containerName="extract" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.553788 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.558526 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ckfc6" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.572527 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw"] Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.616802 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.617007 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.617283 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.617648 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.643313 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbm8p\" (UniqueName: \"kubernetes.io/projected/d177b9be-4037-4f81-8227-9c4361eba85f-kube-api-access-qbm8p\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: 
\"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.643385 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d177b9be-4037-4f81-8227-9c4361eba85f-webhook-cert\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: \"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.643429 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d177b9be-4037-4f81-8227-9c4361eba85f-apiservice-cert\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: \"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.745483 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbm8p\" (UniqueName: \"kubernetes.io/projected/d177b9be-4037-4f81-8227-9c4361eba85f-kube-api-access-qbm8p\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: \"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.745582 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d177b9be-4037-4f81-8227-9c4361eba85f-webhook-cert\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: \"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.745635 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d177b9be-4037-4f81-8227-9c4361eba85f-apiservice-cert\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: \"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.755583 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d177b9be-4037-4f81-8227-9c4361eba85f-apiservice-cert\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: \"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.766931 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d177b9be-4037-4f81-8227-9c4361eba85f-webhook-cert\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: \"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.768974 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbm8p\" (UniqueName: \"kubernetes.io/projected/d177b9be-4037-4f81-8227-9c4361eba85f-kube-api-access-qbm8p\") pod \"metallb-operator-controller-manager-648f7d48f7-wdwdw\" (UID: \"d177b9be-4037-4f81-8227-9c4361eba85f\") " pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.873017 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"] Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.874009 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.884413 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hbw6v"
Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.886457 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.886565 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.899777 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"]
Mar 12 08:17:10 crc kubenswrapper[4809]: I0312 08:17:10.940563 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.055098 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6befee19-0c78-47ca-a608-be246c0d7bb5-webhook-cert\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.055189 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfmm\" (UniqueName: \"kubernetes.io/projected/6befee19-0c78-47ca-a608-be246c0d7bb5-kube-api-access-ldfmm\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.055745 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6befee19-0c78-47ca-a608-be246c0d7bb5-apiservice-cert\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.157749 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6befee19-0c78-47ca-a608-be246c0d7bb5-webhook-cert\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.157805 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldfmm\" (UniqueName: \"kubernetes.io/projected/6befee19-0c78-47ca-a608-be246c0d7bb5-kube-api-access-ldfmm\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.157828 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6befee19-0c78-47ca-a608-be246c0d7bb5-apiservice-cert\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.174173 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6befee19-0c78-47ca-a608-be246c0d7bb5-apiservice-cert\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.188825 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6befee19-0c78-47ca-a608-be246c0d7bb5-webhook-cert\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.205211 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldfmm\" (UniqueName: \"kubernetes.io/projected/6befee19-0c78-47ca-a608-be246c0d7bb5-kube-api-access-ldfmm\") pod \"metallb-operator-webhook-server-59fcfb5dc8-gpg2j\" (UID: \"6befee19-0c78-47ca-a608-be246c0d7bb5\") " pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.496214 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.573873 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw"]
Mar 12 08:17:11 crc kubenswrapper[4809]: W0312 08:17:11.576726 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd177b9be_4037_4f81_8227_9c4361eba85f.slice/crio-9a345b3e7cf7f774b7063adb54b2fd8e0086bf0abc761c827e8c9276217f19be WatchSource:0}: Error finding container 9a345b3e7cf7f774b7063adb54b2fd8e0086bf0abc761c827e8c9276217f19be: Status 404 returned error can't find the container with id 9a345b3e7cf7f774b7063adb54b2fd8e0086bf0abc761c827e8c9276217f19be
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.812885 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"]
Mar 12 08:17:11 crc kubenswrapper[4809]: W0312 08:17:11.819156 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6befee19_0c78_47ca_a608_be246c0d7bb5.slice/crio-309c55276cc1d2bdb51d6e7a624264c2767a83da259dbd4b1632f2d2e2a83731 WatchSource:0}: Error finding container 309c55276cc1d2bdb51d6e7a624264c2767a83da259dbd4b1632f2d2e2a83731: Status 404 returned error can't find the container with id 309c55276cc1d2bdb51d6e7a624264c2767a83da259dbd4b1632f2d2e2a83731
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.854178 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" event={"ID":"d177b9be-4037-4f81-8227-9c4361eba85f","Type":"ContainerStarted","Data":"9a345b3e7cf7f774b7063adb54b2fd8e0086bf0abc761c827e8c9276217f19be"}
Mar 12 08:17:11 crc kubenswrapper[4809]: I0312 08:17:11.855782 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" event={"ID":"6befee19-0c78-47ca-a608-be246c0d7bb5","Type":"ContainerStarted","Data":"309c55276cc1d2bdb51d6e7a624264c2767a83da259dbd4b1632f2d2e2a83731"}
Mar 12 08:17:15 crc kubenswrapper[4809]: I0312 08:17:15.049211 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 08:17:15 crc kubenswrapper[4809]: I0312 08:17:15.049826 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 08:17:17 crc kubenswrapper[4809]: I0312 08:17:17.916255 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" event={"ID":"6befee19-0c78-47ca-a608-be246c0d7bb5","Type":"ContainerStarted","Data":"026ed91a0e60127784c5d684fb23c4c3092787e276daee53bf7b01c4da6cd7bb"}
Mar 12 08:17:17 crc kubenswrapper[4809]: I0312 08:17:17.916603 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:17 crc kubenswrapper[4809]: I0312 08:17:17.918088 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" event={"ID":"d177b9be-4037-4f81-8227-9c4361eba85f","Type":"ContainerStarted","Data":"dd5c413b46748727386ab53b46401c49ad8461e7d90dcffa0b8e4b5cf5718f4b"}
Mar 12 08:17:17 crc kubenswrapper[4809]: I0312 08:17:17.918344 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw"
Mar 12 08:17:17 crc kubenswrapper[4809]: I0312 08:17:17.945953 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" podStartSLOduration=2.554572042 podStartE2EDuration="7.945932735s" podCreationTimestamp="2026-03-12 08:17:10 +0000 UTC" firstStartedPulling="2026-03-12 08:17:11.822082832 +0000 UTC m=+1105.404118565" lastFinishedPulling="2026-03-12 08:17:17.213443525 +0000 UTC m=+1110.795479258" observedRunningTime="2026-03-12 08:17:17.94022498 +0000 UTC m=+1111.522260723" watchObservedRunningTime="2026-03-12 08:17:17.945932735 +0000 UTC m=+1111.527968468"
Mar 12 08:17:17 crc kubenswrapper[4809]: I0312 08:17:17.976835 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" podStartSLOduration=2.368730598 podStartE2EDuration="7.976815085s" podCreationTimestamp="2026-03-12 08:17:10 +0000 UTC" firstStartedPulling="2026-03-12 08:17:11.579132774 +0000 UTC m=+1105.161168507" lastFinishedPulling="2026-03-12 08:17:17.187217261 +0000 UTC m=+1110.769252994" observedRunningTime="2026-03-12 08:17:17.972960711 +0000 UTC m=+1111.554996454" watchObservedRunningTime="2026-03-12 08:17:17.976815085 +0000 UTC m=+1111.558850808"
Mar 12 08:17:19 crc kubenswrapper[4809]: E0312 08:17:19.839245 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 08:17:20 crc kubenswrapper[4809]: E0312 08:17:20.471704 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 08:17:30 crc kubenswrapper[4809]: E0312 08:17:30.003857 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 08:17:31 crc kubenswrapper[4809]: I0312 08:17:31.507244 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j"
Mar 12 08:17:35 crc kubenswrapper[4809]: E0312 08:17:35.613236 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 08:17:40 crc kubenswrapper[4809]: E0312 08:17:40.037861 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d10085_7b35_4cc0_ae7a_b9c3f443e326.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 08:17:45 crc kubenswrapper[4809]: I0312 08:17:45.048580 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 08:17:45 crc kubenswrapper[4809]: I0312 08:17:45.049473 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 08:17:50 crc kubenswrapper[4809]: I0312 08:17:50.944555 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.768926 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-n22vs"]
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.772091 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.779319 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.780238 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mw45w"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.784531 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.797379 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"]
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.800810 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.803403 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.809964 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"]
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.827204 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-sockets\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.827248 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qk8d\" (UniqueName: \"kubernetes.io/projected/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-kube-api-access-5qk8d\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.827284 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-metrics-certs\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.827326 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-metrics\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.827349 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-conf\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.828327 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-startup\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.828365 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-reloader\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.882601 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jt7s5"]
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.884618 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.891333 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6bhmr"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.891855 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.892273 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.892463 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.907251 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-7jdts"]
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.908670 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.910727 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.930750 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-reloader\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.930831 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-sockets\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.930868 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qk8d\" (UniqueName: \"kubernetes.io/projected/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-kube-api-access-5qk8d\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.930915 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcskh\" (UniqueName: \"kubernetes.io/projected/6b5e61e4-2d13-491e-be53-aed7ae027cb1-kube-api-access-rcskh\") pod \"frr-k8s-webhook-server-bcc4b6f68-rmrt9\" (UID: \"6b5e61e4-2d13-491e-be53-aed7ae027cb1\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.930949 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-metrics-certs\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.930998 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-metrics\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931023 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b5e61e4-2d13-491e-be53-aed7ae027cb1-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-rmrt9\" (UID: \"6b5e61e4-2d13-491e-be53-aed7ae027cb1\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-conf\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931090 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbbf\" (UniqueName: \"kubernetes.io/projected/d64bdb22-6590-41af-94ad-0e725ca0355a-kube-api-access-bfbbf\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931156 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931203 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-metrics-certs\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931235 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d64bdb22-6590-41af-94ad-0e725ca0355a-metallb-excludel2\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931245 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-reloader\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931287 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-startup\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.931690 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-conf\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.932364 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-metrics\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.932616 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-sockets\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.932800 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-frr-startup\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.953874 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-metrics-certs\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.959101 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-7jdts"]
Mar 12 08:17:51 crc kubenswrapper[4809]: I0312 08:17:51.963331 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qk8d\" (UniqueName: \"kubernetes.io/projected/773274bc-3d57-4d1c-aaf9-f81ce1b981c4-kube-api-access-5qk8d\") pod \"frr-k8s-n22vs\" (UID: \"773274bc-3d57-4d1c-aaf9-f81ce1b981c4\") " pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033446 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7ddc716-c09f-4923-8e70-f2251873aea9-metrics-certs\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033518 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcskh\" (UniqueName: \"kubernetes.io/projected/6b5e61e4-2d13-491e-be53-aed7ae027cb1-kube-api-access-rcskh\") pod \"frr-k8s-webhook-server-bcc4b6f68-rmrt9\" (UID: \"6b5e61e4-2d13-491e-be53-aed7ae027cb1\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033570 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bz96\" (UniqueName: \"kubernetes.io/projected/b7ddc716-c09f-4923-8e70-f2251873aea9-kube-api-access-7bz96\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033608 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b5e61e4-2d13-491e-be53-aed7ae027cb1-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-rmrt9\" (UID: \"6b5e61e4-2d13-491e-be53-aed7ae027cb1\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033642 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbbf\" (UniqueName: \"kubernetes.io/projected/d64bdb22-6590-41af-94ad-0e725ca0355a-kube-api-access-bfbbf\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033716 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033754 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-metrics-certs\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033776 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d64bdb22-6590-41af-94ad-0e725ca0355a-metallb-excludel2\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.033801 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7ddc716-c09f-4923-8e70-f2251873aea9-cert\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: E0312 08:17:52.034436 4809 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 12 08:17:52 crc kubenswrapper[4809]: E0312 08:17:52.034544 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist podName:d64bdb22-6590-41af-94ad-0e725ca0355a nodeName:}" failed. No retries permitted until 2026-03-12 08:17:52.534522188 +0000 UTC m=+1146.116557921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist") pod "speaker-jt7s5" (UID: "d64bdb22-6590-41af-94ad-0e725ca0355a") : secret "metallb-memberlist" not found
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.035030 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d64bdb22-6590-41af-94ad-0e725ca0355a-metallb-excludel2\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.041701 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-metrics-certs\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.041892 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b5e61e4-2d13-491e-be53-aed7ae027cb1-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-rmrt9\" (UID: \"6b5e61e4-2d13-491e-be53-aed7ae027cb1\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.053266 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcskh\" (UniqueName: \"kubernetes.io/projected/6b5e61e4-2d13-491e-be53-aed7ae027cb1-kube-api-access-rcskh\") pod \"frr-k8s-webhook-server-bcc4b6f68-rmrt9\" (UID: \"6b5e61e4-2d13-491e-be53-aed7ae027cb1\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.053705 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbbf\" (UniqueName: \"kubernetes.io/projected/d64bdb22-6590-41af-94ad-0e725ca0355a-kube-api-access-bfbbf\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.092934 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n22vs"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.117739 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.135051 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7ddc716-c09f-4923-8e70-f2251873aea9-metrics-certs\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.135143 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bz96\" (UniqueName: \"kubernetes.io/projected/b7ddc716-c09f-4923-8e70-f2251873aea9-kube-api-access-7bz96\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.135531 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7ddc716-c09f-4923-8e70-f2251873aea9-cert\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.139698 4809 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.143552 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b7ddc716-c09f-4923-8e70-f2251873aea9-metrics-certs\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.156369 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7ddc716-c09f-4923-8e70-f2251873aea9-cert\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.161484 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bz96\" (UniqueName: \"kubernetes.io/projected/b7ddc716-c09f-4923-8e70-f2251873aea9-kube-api-access-7bz96\") pod \"controller-7bb4cc7c98-7jdts\" (UID: \"b7ddc716-c09f-4923-8e70-f2251873aea9\") " pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.293101 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.296011 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.546740 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5"
Mar 12 08:17:52 crc kubenswrapper[4809]: E0312 08:17:52.546908 4809 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 12 08:17:52 crc kubenswrapper[4809]: E0312 08:17:52.546955 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist podName:d64bdb22-6590-41af-94ad-0e725ca0355a nodeName:}" failed. No retries permitted until 2026-03-12 08:17:53.546942133 +0000 UTC m=+1147.128977866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist") pod "speaker-jt7s5" (UID: "d64bdb22-6590-41af-94ad-0e725ca0355a") : secret "metallb-memberlist" not found
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.590664 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9"]
Mar 12 08:17:52 crc kubenswrapper[4809]: I0312 08:17:52.753630 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-7jdts"]
Mar 12 08:17:52 crc kubenswrapper[4809]: W0312 08:17:52.756768 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7ddc716_c09f_4923_8e70_f2251873aea9.slice/crio-9598293042b09917293352ba8a8cfcefddcf90820565b8109950d4f9317b9c80 WatchSource:0}: Error finding container 9598293042b09917293352ba8a8cfcefddcf90820565b8109950d4f9317b9c80: Status 404 returned error can't find the container with id 9598293042b09917293352ba8a8cfcefddcf90820565b8109950d4f9317b9c80
Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.275774 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-7jdts" event={"ID":"b7ddc716-c09f-4923-8e70-f2251873aea9","Type":"ContainerStarted","Data":"c9deb971215cbd9ad7505e5cf8787c6c5c1fd6406906f6b6c2245d48985eb3a5"}
Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.276217 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-7jdts" event={"ID":"b7ddc716-c09f-4923-8e70-f2251873aea9","Type":"ContainerStarted","Data":"874336bec80c8d58a380231463cdb488cfe96d70677ac37ad84c9f8a1d0e03f0"}
Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.276232 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-7jdts"
Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.276242 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-7jdts" event={"ID":"b7ddc716-c09f-4923-8e70-f2251873aea9","Type":"ContainerStarted","Data":"9598293042b09917293352ba8a8cfcefddcf90820565b8109950d4f9317b9c80"}
Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.277136 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" event={"ID":"6b5e61e4-2d13-491e-be53-aed7ae027cb1","Type":"ContainerStarted","Data":"7319d9207fecca0176ae7b242acb7a6f794e633c51ef273fcd57ee7505cacb30"}
Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.279220 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"5a898ae281dd1f9e4e8388b5dadf1d332626ce823e617e5eaf1cbca360527f0c"}
Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.298051 4809 pod_startup_latency_tracker.go:104] "Observed
pod startup duration" pod="metallb-system/controller-7bb4cc7c98-7jdts" podStartSLOduration=2.298031419 podStartE2EDuration="2.298031419s" podCreationTimestamp="2026-03-12 08:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:17:53.294874194 +0000 UTC m=+1146.876909937" watchObservedRunningTime="2026-03-12 08:17:53.298031419 +0000 UTC m=+1146.880067152" Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.570270 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5" Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.594383 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d64bdb22-6590-41af-94ad-0e725ca0355a-memberlist\") pod \"speaker-jt7s5\" (UID: \"d64bdb22-6590-41af-94ad-0e725ca0355a\") " pod="metallb-system/speaker-jt7s5" Mar 12 08:17:53 crc kubenswrapper[4809]: I0312 08:17:53.710805 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jt7s5" Mar 12 08:17:54 crc kubenswrapper[4809]: I0312 08:17:54.313526 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jt7s5" event={"ID":"d64bdb22-6590-41af-94ad-0e725ca0355a","Type":"ContainerStarted","Data":"ade5c69ad1337d150741ab5e04d79cdca3f83cf4a3e5e0abf35fb6bf1a1930a9"} Mar 12 08:17:54 crc kubenswrapper[4809]: I0312 08:17:54.313820 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jt7s5" event={"ID":"d64bdb22-6590-41af-94ad-0e725ca0355a","Type":"ContainerStarted","Data":"981f11de023f7ab148cf88ce71d1a8aecebde4a01ccddbf0351fb7bcd8d619b7"} Mar 12 08:17:55 crc kubenswrapper[4809]: I0312 08:17:55.325638 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jt7s5" event={"ID":"d64bdb22-6590-41af-94ad-0e725ca0355a","Type":"ContainerStarted","Data":"b1406f389604cdc2273c0dab25139d6ff1bd5475b0a7f0d512371e25bd50df96"} Mar 12 08:17:55 crc kubenswrapper[4809]: I0312 08:17:55.325861 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jt7s5" Mar 12 08:17:55 crc kubenswrapper[4809]: I0312 08:17:55.351589 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jt7s5" podStartSLOduration=4.351566028 podStartE2EDuration="4.351566028s" podCreationTimestamp="2026-03-12 08:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:17:55.348172365 +0000 UTC m=+1148.930208108" watchObservedRunningTime="2026-03-12 08:17:55.351566028 +0000 UTC m=+1148.933601761" Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.139682 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555058-q6lwz"] Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.143320 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.146404 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.146655 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.146851 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.162203 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555058-q6lwz"] Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.242205 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mpt\" (UniqueName: \"kubernetes.io/projected/58ee5bca-cc6a-4e65-bd6e-9ee1156c19de-kube-api-access-l6mpt\") pod \"auto-csr-approver-29555058-q6lwz\" (UID: \"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de\") " pod="openshift-infra/auto-csr-approver-29555058-q6lwz" Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.343965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mpt\" (UniqueName: \"kubernetes.io/projected/58ee5bca-cc6a-4e65-bd6e-9ee1156c19de-kube-api-access-l6mpt\") pod \"auto-csr-approver-29555058-q6lwz\" (UID: \"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de\") " pod="openshift-infra/auto-csr-approver-29555058-q6lwz" Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.373075 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6mpt\" (UniqueName: \"kubernetes.io/projected/58ee5bca-cc6a-4e65-bd6e-9ee1156c19de-kube-api-access-l6mpt\") pod \"auto-csr-approver-29555058-q6lwz\" (UID: \"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de\") " 
pod="openshift-infra/auto-csr-approver-29555058-q6lwz" Mar 12 08:18:00 crc kubenswrapper[4809]: I0312 08:18:00.464000 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" Mar 12 08:18:01 crc kubenswrapper[4809]: I0312 08:18:01.064179 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555058-q6lwz"] Mar 12 08:18:01 crc kubenswrapper[4809]: I0312 08:18:01.397032 4809 generic.go:334] "Generic (PLEG): container finished" podID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerID="a2c2377c3b479d575c53ed379bda383ad0852fc12db6e1ce9fc59d1b6b239b1a" exitCode=0 Mar 12 08:18:01 crc kubenswrapper[4809]: I0312 08:18:01.397083 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerDied","Data":"a2c2377c3b479d575c53ed379bda383ad0852fc12db6e1ce9fc59d1b6b239b1a"} Mar 12 08:18:01 crc kubenswrapper[4809]: I0312 08:18:01.399701 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" event={"ID":"6b5e61e4-2d13-491e-be53-aed7ae027cb1","Type":"ContainerStarted","Data":"1204925015f75d92dcf2b3a5d0b3c76bf3d729fe9d0def579db6e8215475ff99"} Mar 12 08:18:01 crc kubenswrapper[4809]: I0312 08:18:01.399844 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" Mar 12 08:18:01 crc kubenswrapper[4809]: I0312 08:18:01.400805 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" event={"ID":"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de","Type":"ContainerStarted","Data":"c6911d7fcfcc4ca71afa39af6422c703c874cc35202ad78569ebcdaddc41b2d2"} Mar 12 08:18:01 crc kubenswrapper[4809]: I0312 08:18:01.455784 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" podStartSLOduration=2.820517767 podStartE2EDuration="10.455760606s" podCreationTimestamp="2026-03-12 08:17:51 +0000 UTC" firstStartedPulling="2026-03-12 08:17:52.596918411 +0000 UTC m=+1146.178954144" lastFinishedPulling="2026-03-12 08:18:00.23216125 +0000 UTC m=+1153.814196983" observedRunningTime="2026-03-12 08:18:01.455312734 +0000 UTC m=+1155.037348498" watchObservedRunningTime="2026-03-12 08:18:01.455760606 +0000 UTC m=+1155.037796369" Mar 12 08:18:02 crc kubenswrapper[4809]: I0312 08:18:02.301223 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-7jdts" Mar 12 08:18:02 crc kubenswrapper[4809]: I0312 08:18:02.412646 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" event={"ID":"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de","Type":"ContainerStarted","Data":"f3db8f17cfacd1e7dbec9a00a340b5a8a163c21025c6d5d51a65608f3fa28bb8"} Mar 12 08:18:02 crc kubenswrapper[4809]: I0312 08:18:02.415873 4809 generic.go:334] "Generic (PLEG): container finished" podID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerID="e58d0f09a3620e3c5068de57a27b89fc7343372117435b77d87700f8afc49bbb" exitCode=0 Mar 12 08:18:02 crc kubenswrapper[4809]: I0312 08:18:02.416035 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerDied","Data":"e58d0f09a3620e3c5068de57a27b89fc7343372117435b77d87700f8afc49bbb"} Mar 12 08:18:02 crc kubenswrapper[4809]: I0312 08:18:02.439335 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" podStartSLOduration=1.67318643 podStartE2EDuration="2.439303835s" podCreationTimestamp="2026-03-12 08:18:00 +0000 UTC" firstStartedPulling="2026-03-12 08:18:01.078544778 +0000 UTC m=+1154.660580521" lastFinishedPulling="2026-03-12 
08:18:01.844662193 +0000 UTC m=+1155.426697926" observedRunningTime="2026-03-12 08:18:02.428408899 +0000 UTC m=+1156.010444652" watchObservedRunningTime="2026-03-12 08:18:02.439303835 +0000 UTC m=+1156.021339558" Mar 12 08:18:03 crc kubenswrapper[4809]: I0312 08:18:03.438503 4809 generic.go:334] "Generic (PLEG): container finished" podID="58ee5bca-cc6a-4e65-bd6e-9ee1156c19de" containerID="f3db8f17cfacd1e7dbec9a00a340b5a8a163c21025c6d5d51a65608f3fa28bb8" exitCode=0 Mar 12 08:18:03 crc kubenswrapper[4809]: I0312 08:18:03.439242 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" event={"ID":"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de","Type":"ContainerDied","Data":"f3db8f17cfacd1e7dbec9a00a340b5a8a163c21025c6d5d51a65608f3fa28bb8"} Mar 12 08:18:03 crc kubenswrapper[4809]: I0312 08:18:03.448666 4809 generic.go:334] "Generic (PLEG): container finished" podID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerID="78af531835bc1cde329425383d3bccda1ca925cc56864220daa1f24f635c9632" exitCode=0 Mar 12 08:18:03 crc kubenswrapper[4809]: I0312 08:18:03.448741 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerDied","Data":"78af531835bc1cde329425383d3bccda1ca925cc56864220daa1f24f635c9632"} Mar 12 08:18:04 crc kubenswrapper[4809]: I0312 08:18:04.460770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"5628ba1c0c5e61162d0323ffe564d0580b23c4e55fd3298e6f02b3a1845877e1"} Mar 12 08:18:04 crc kubenswrapper[4809]: I0312 08:18:04.461216 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"6493d752fe68377c30b2a16d02fe37185d5314dc75f9c063e91ef9d2007566b3"} Mar 12 08:18:04 crc kubenswrapper[4809]: 
I0312 08:18:04.461233 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"227daedc738e6c8c1fe10d9956a358f31cad4400920affb4998787f2b5adde02"} Mar 12 08:18:04 crc kubenswrapper[4809]: I0312 08:18:04.461249 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"b8dbac94cf0906cf8b0e93d8247b8a9d204f6e8dca9a7504899c462a4a4937c2"} Mar 12 08:18:04 crc kubenswrapper[4809]: I0312 08:18:04.461262 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"2e150576a9ab7f348d2f48751641b97f18a3b8a403d5f71d7c95cc18cad52a1f"} Mar 12 08:18:04 crc kubenswrapper[4809]: I0312 08:18:04.819547 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" Mar 12 08:18:04 crc kubenswrapper[4809]: I0312 08:18:04.954475 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6mpt\" (UniqueName: \"kubernetes.io/projected/58ee5bca-cc6a-4e65-bd6e-9ee1156c19de-kube-api-access-l6mpt\") pod \"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de\" (UID: \"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de\") " Mar 12 08:18:04 crc kubenswrapper[4809]: I0312 08:18:04.964036 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ee5bca-cc6a-4e65-bd6e-9ee1156c19de-kube-api-access-l6mpt" (OuterVolumeSpecName: "kube-api-access-l6mpt") pod "58ee5bca-cc6a-4e65-bd6e-9ee1156c19de" (UID: "58ee5bca-cc6a-4e65-bd6e-9ee1156c19de"). InnerVolumeSpecName "kube-api-access-l6mpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.057350 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6mpt\" (UniqueName: \"kubernetes.io/projected/58ee5bca-cc6a-4e65-bd6e-9ee1156c19de-kube-api-access-l6mpt\") on node \"crc\" DevicePath \"\"" Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.473700 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" event={"ID":"58ee5bca-cc6a-4e65-bd6e-9ee1156c19de","Type":"ContainerDied","Data":"c6911d7fcfcc4ca71afa39af6422c703c874cc35202ad78569ebcdaddc41b2d2"} Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.474935 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6911d7fcfcc4ca71afa39af6422c703c874cc35202ad78569ebcdaddc41b2d2" Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.473977 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555058-q6lwz" Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.484425 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"800a4da09c76353a56ca19b4c383c3937e02c19c21592de61ce34a2a15dc4fc0"} Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.484970 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-n22vs" Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.538255 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-n22vs" podStartSLOduration=6.570837082 podStartE2EDuration="14.538230253s" podCreationTimestamp="2026-03-12 08:17:51 +0000 UTC" firstStartedPulling="2026-03-12 08:17:52.292864833 +0000 UTC m=+1145.874900566" lastFinishedPulling="2026-03-12 08:18:00.260258014 +0000 UTC 
m=+1153.842293737" observedRunningTime="2026-03-12 08:18:05.52923964 +0000 UTC m=+1159.111275393" watchObservedRunningTime="2026-03-12 08:18:05.538230253 +0000 UTC m=+1159.120265996" Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.556802 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555052-th5qj"] Mar 12 08:18:05 crc kubenswrapper[4809]: I0312 08:18:05.565734 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555052-th5qj"] Mar 12 08:18:07 crc kubenswrapper[4809]: I0312 08:18:07.093879 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-n22vs" Mar 12 08:18:07 crc kubenswrapper[4809]: I0312 08:18:07.114971 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1979c745-e0ef-473f-b9df-b7444a9cfd62" path="/var/lib/kubelet/pods/1979c745-e0ef-473f-b9df-b7444a9cfd62/volumes" Mar 12 08:18:07 crc kubenswrapper[4809]: I0312 08:18:07.135241 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-n22vs" Mar 12 08:18:12 crc kubenswrapper[4809]: I0312 08:18:12.127266 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" Mar 12 08:18:13 crc kubenswrapper[4809]: I0312 08:18:13.716025 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jt7s5" Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.048952 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.049101 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.049271 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.050320 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"943f7c8e36ba68e565889e8e2f0d5e4ed7b8e09774d0614bb78547c2761cd4cc"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.050423 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://943f7c8e36ba68e565889e8e2f0d5e4ed7b8e09774d0614bb78547c2761cd4cc" gracePeriod=600 Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.591279 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="943f7c8e36ba68e565889e8e2f0d5e4ed7b8e09774d0614bb78547c2761cd4cc" exitCode=0 Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.591321 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"943f7c8e36ba68e565889e8e2f0d5e4ed7b8e09774d0614bb78547c2761cd4cc"} Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.591739 4809 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"a328a2cdcc5abe038555d03cc30ceecaf7377dc57d422a2ede895c12e879661b"} Mar 12 08:18:15 crc kubenswrapper[4809]: I0312 08:18:15.591765 4809 scope.go:117] "RemoveContainer" containerID="5a05a1c82af92d83a47e94a370bbfc613a47227fe6fbddbdc5d572ba375fcf3f" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.370608 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v8w7v"] Mar 12 08:18:16 crc kubenswrapper[4809]: E0312 08:18:16.371198 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ee5bca-cc6a-4e65-bd6e-9ee1156c19de" containerName="oc" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.371223 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ee5bca-cc6a-4e65-bd6e-9ee1156c19de" containerName="oc" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.371667 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ee5bca-cc6a-4e65-bd6e-9ee1156c19de" containerName="oc" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.373025 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v8w7v" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.379788 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v8w7v"] Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.382108 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.382878 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9ppnl" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.386155 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.502193 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rjfr\" (UniqueName: \"kubernetes.io/projected/ba8c20bb-2adc-4272-8a84-eb58f480c3d0-kube-api-access-7rjfr\") pod \"openstack-operator-index-v8w7v\" (UID: \"ba8c20bb-2adc-4272-8a84-eb58f480c3d0\") " pod="openstack-operators/openstack-operator-index-v8w7v" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.604147 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rjfr\" (UniqueName: \"kubernetes.io/projected/ba8c20bb-2adc-4272-8a84-eb58f480c3d0-kube-api-access-7rjfr\") pod \"openstack-operator-index-v8w7v\" (UID: \"ba8c20bb-2adc-4272-8a84-eb58f480c3d0\") " pod="openstack-operators/openstack-operator-index-v8w7v" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.624345 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rjfr\" (UniqueName: \"kubernetes.io/projected/ba8c20bb-2adc-4272-8a84-eb58f480c3d0-kube-api-access-7rjfr\") pod \"openstack-operator-index-v8w7v\" (UID: 
\"ba8c20bb-2adc-4272-8a84-eb58f480c3d0\") " pod="openstack-operators/openstack-operator-index-v8w7v" Mar 12 08:18:16 crc kubenswrapper[4809]: I0312 08:18:16.715079 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v8w7v" Mar 12 08:18:17 crc kubenswrapper[4809]: I0312 08:18:17.144187 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v8w7v"] Mar 12 08:18:17 crc kubenswrapper[4809]: W0312 08:18:17.152958 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba8c20bb_2adc_4272_8a84_eb58f480c3d0.slice/crio-cbe8658343c2b013f2b1fbbba83d2097fab54366bf43d87e79853cf30a09d1c8 WatchSource:0}: Error finding container cbe8658343c2b013f2b1fbbba83d2097fab54366bf43d87e79853cf30a09d1c8: Status 404 returned error can't find the container with id cbe8658343c2b013f2b1fbbba83d2097fab54366bf43d87e79853cf30a09d1c8 Mar 12 08:18:17 crc kubenswrapper[4809]: I0312 08:18:17.611357 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8w7v" event={"ID":"ba8c20bb-2adc-4272-8a84-eb58f480c3d0","Type":"ContainerStarted","Data":"cbe8658343c2b013f2b1fbbba83d2097fab54366bf43d87e79853cf30a09d1c8"} Mar 12 08:18:19 crc kubenswrapper[4809]: I0312 08:18:19.731956 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v8w7v"] Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.340762 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fxhzz"] Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.342283 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.352059 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fxhzz"] Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.481351 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rf2n\" (UniqueName: \"kubernetes.io/projected/1e560783-0ec2-4688-a79e-59a1df5b2e61-kube-api-access-2rf2n\") pod \"openstack-operator-index-fxhzz\" (UID: \"1e560783-0ec2-4688-a79e-59a1df5b2e61\") " pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.584090 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rf2n\" (UniqueName: \"kubernetes.io/projected/1e560783-0ec2-4688-a79e-59a1df5b2e61-kube-api-access-2rf2n\") pod \"openstack-operator-index-fxhzz\" (UID: \"1e560783-0ec2-4688-a79e-59a1df5b2e61\") " pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.606702 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rf2n\" (UniqueName: \"kubernetes.io/projected/1e560783-0ec2-4688-a79e-59a1df5b2e61-kube-api-access-2rf2n\") pod \"openstack-operator-index-fxhzz\" (UID: \"1e560783-0ec2-4688-a79e-59a1df5b2e61\") " pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.648732 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8w7v" event={"ID":"ba8c20bb-2adc-4272-8a84-eb58f480c3d0","Type":"ContainerStarted","Data":"98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8"} Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.648957 4809 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/openstack-operator-index-v8w7v" podUID="ba8c20bb-2adc-4272-8a84-eb58f480c3d0" containerName="registry-server" containerID="cri-o://98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8" gracePeriod=2 Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.669980 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.672883 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v8w7v" podStartSLOduration=1.460438988 podStartE2EDuration="4.672854251s" podCreationTimestamp="2026-03-12 08:18:16 +0000 UTC" firstStartedPulling="2026-03-12 08:18:17.156444234 +0000 UTC m=+1170.738479987" lastFinishedPulling="2026-03-12 08:18:20.368859507 +0000 UTC m=+1173.950895250" observedRunningTime="2026-03-12 08:18:20.663493272 +0000 UTC m=+1174.245529005" watchObservedRunningTime="2026-03-12 08:18:20.672854251 +0000 UTC m=+1174.254889994" Mar 12 08:18:20 crc kubenswrapper[4809]: I0312 08:18:20.933945 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fxhzz"] Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.090175 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v8w7v" Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.197711 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rjfr\" (UniqueName: \"kubernetes.io/projected/ba8c20bb-2adc-4272-8a84-eb58f480c3d0-kube-api-access-7rjfr\") pod \"ba8c20bb-2adc-4272-8a84-eb58f480c3d0\" (UID: \"ba8c20bb-2adc-4272-8a84-eb58f480c3d0\") " Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.205873 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8c20bb-2adc-4272-8a84-eb58f480c3d0-kube-api-access-7rjfr" (OuterVolumeSpecName: "kube-api-access-7rjfr") pod "ba8c20bb-2adc-4272-8a84-eb58f480c3d0" (UID: "ba8c20bb-2adc-4272-8a84-eb58f480c3d0"). InnerVolumeSpecName "kube-api-access-7rjfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.300566 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rjfr\" (UniqueName: \"kubernetes.io/projected/ba8c20bb-2adc-4272-8a84-eb58f480c3d0-kube-api-access-7rjfr\") on node \"crc\" DevicePath \"\"" Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.663923 4809 generic.go:334] "Generic (PLEG): container finished" podID="ba8c20bb-2adc-4272-8a84-eb58f480c3d0" containerID="98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8" exitCode=0 Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.663994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8w7v" event={"ID":"ba8c20bb-2adc-4272-8a84-eb58f480c3d0","Type":"ContainerDied","Data":"98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8"} Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.664009 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v8w7v" Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.664831 4809 scope.go:117] "RemoveContainer" containerID="98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8" Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.664831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8w7v" event={"ID":"ba8c20bb-2adc-4272-8a84-eb58f480c3d0","Type":"ContainerDied","Data":"cbe8658343c2b013f2b1fbbba83d2097fab54366bf43d87e79853cf30a09d1c8"} Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.666858 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxhzz" event={"ID":"1e560783-0ec2-4688-a79e-59a1df5b2e61","Type":"ContainerStarted","Data":"c75cd2420e84e4d4ddb914ebd6677b821a32a5ec4685448ee1ebcf37372b79b6"} Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.666902 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxhzz" event={"ID":"1e560783-0ec2-4688-a79e-59a1df5b2e61","Type":"ContainerStarted","Data":"798d8db79b2071fec18c9e81372b945b23b4946c1a0fa3454c6022a790ac2cab"} Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.690388 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fxhzz" podStartSLOduration=1.6236509369999998 podStartE2EDuration="1.690368473s" podCreationTimestamp="2026-03-12 08:18:20 +0000 UTC" firstStartedPulling="2026-03-12 08:18:20.967198228 +0000 UTC m=+1174.549233961" lastFinishedPulling="2026-03-12 08:18:21.033915764 +0000 UTC m=+1174.615951497" observedRunningTime="2026-03-12 08:18:21.688363768 +0000 UTC m=+1175.270399541" watchObservedRunningTime="2026-03-12 08:18:21.690368473 +0000 UTC m=+1175.272404216" Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.702611 4809 scope.go:117] "RemoveContainer" 
containerID="98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8" Mar 12 08:18:21 crc kubenswrapper[4809]: E0312 08:18:21.705219 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8\": container with ID starting with 98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8 not found: ID does not exist" containerID="98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8" Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.705271 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8"} err="failed to get container status \"98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8\": rpc error: code = NotFound desc = could not find container \"98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8\": container with ID starting with 98a8813ec76e2f59b735b75659e0b3f49d2b3e200bbf4c14bd29d4461d482dd8 not found: ID does not exist" Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.710217 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v8w7v"] Mar 12 08:18:21 crc kubenswrapper[4809]: I0312 08:18:21.715729 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-v8w7v"] Mar 12 08:18:22 crc kubenswrapper[4809]: I0312 08:18:22.099448 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-n22vs" Mar 12 08:18:23 crc kubenswrapper[4809]: I0312 08:18:23.118739 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8c20bb-2adc-4272-8a84-eb58f480c3d0" path="/var/lib/kubelet/pods/ba8c20bb-2adc-4272-8a84-eb58f480c3d0/volumes" Mar 12 08:18:30 crc kubenswrapper[4809]: I0312 08:18:30.671460 4809 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:30 crc kubenswrapper[4809]: I0312 08:18:30.672337 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:30 crc kubenswrapper[4809]: I0312 08:18:30.706365 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:30 crc kubenswrapper[4809]: I0312 08:18:30.805404 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.056268 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f"] Mar 12 08:18:37 crc kubenswrapper[4809]: E0312 08:18:37.057134 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8c20bb-2adc-4272-8a84-eb58f480c3d0" containerName="registry-server" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.057148 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8c20bb-2adc-4272-8a84-eb58f480c3d0" containerName="registry-server" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.057315 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8c20bb-2adc-4272-8a84-eb58f480c3d0" containerName="registry-server" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.058434 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.071900 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f"] Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.076042 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4sm2l" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.162940 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldgs9\" (UniqueName: \"kubernetes.io/projected/5973f973-351b-4a16-a21e-330a074ef1e3-kube-api-access-ldgs9\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.162993 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-util\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.163030 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-bundle\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 
08:18:37.264474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldgs9\" (UniqueName: \"kubernetes.io/projected/5973f973-351b-4a16-a21e-330a074ef1e3-kube-api-access-ldgs9\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.264595 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-util\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.264634 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-bundle\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.265408 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-bundle\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.265556 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-util\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.290199 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldgs9\" (UniqueName: \"kubernetes.io/projected/5973f973-351b-4a16-a21e-330a074ef1e3-kube-api-access-ldgs9\") pod \"caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.375963 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:37 crc kubenswrapper[4809]: I0312 08:18:37.907948 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f"] Mar 12 08:18:37 crc kubenswrapper[4809]: W0312 08:18:37.916910 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5973f973_351b_4a16_a21e_330a074ef1e3.slice/crio-e5998b6070f4ce8c5e0dde0bcb43095718d52326b4c7083a7ed1b5c6d56fd8fe WatchSource:0}: Error finding container e5998b6070f4ce8c5e0dde0bcb43095718d52326b4c7083a7ed1b5c6d56fd8fe: Status 404 returned error can't find the container with id e5998b6070f4ce8c5e0dde0bcb43095718d52326b4c7083a7ed1b5c6d56fd8fe Mar 12 08:18:38 crc kubenswrapper[4809]: I0312 08:18:38.887508 4809 generic.go:334] "Generic (PLEG): container finished" podID="5973f973-351b-4a16-a21e-330a074ef1e3" containerID="786a5a2d30a211294698c3229b5d5ca8387aa6f9968a55164c66f9e902313fbf" exitCode=0 Mar 12 
08:18:38 crc kubenswrapper[4809]: I0312 08:18:38.887592 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" event={"ID":"5973f973-351b-4a16-a21e-330a074ef1e3","Type":"ContainerDied","Data":"786a5a2d30a211294698c3229b5d5ca8387aa6f9968a55164c66f9e902313fbf"} Mar 12 08:18:38 crc kubenswrapper[4809]: I0312 08:18:38.887869 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" event={"ID":"5973f973-351b-4a16-a21e-330a074ef1e3","Type":"ContainerStarted","Data":"e5998b6070f4ce8c5e0dde0bcb43095718d52326b4c7083a7ed1b5c6d56fd8fe"} Mar 12 08:18:39 crc kubenswrapper[4809]: I0312 08:18:39.898672 4809 generic.go:334] "Generic (PLEG): container finished" podID="5973f973-351b-4a16-a21e-330a074ef1e3" containerID="96ddb2f93fbe1c944a443ec5b9f1d91b43afbfb3947bd9ae3afdeb0dd8e1b9d3" exitCode=0 Mar 12 08:18:39 crc kubenswrapper[4809]: I0312 08:18:39.898775 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" event={"ID":"5973f973-351b-4a16-a21e-330a074ef1e3","Type":"ContainerDied","Data":"96ddb2f93fbe1c944a443ec5b9f1d91b43afbfb3947bd9ae3afdeb0dd8e1b9d3"} Mar 12 08:18:40 crc kubenswrapper[4809]: I0312 08:18:40.913813 4809 generic.go:334] "Generic (PLEG): container finished" podID="5973f973-351b-4a16-a21e-330a074ef1e3" containerID="defe284391ade6a009a684a64377f07625b3d5ee8e73c750d799c4d6be855e99" exitCode=0 Mar 12 08:18:40 crc kubenswrapper[4809]: I0312 08:18:40.913956 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" event={"ID":"5973f973-351b-4a16-a21e-330a074ef1e3","Type":"ContainerDied","Data":"defe284391ade6a009a684a64377f07625b3d5ee8e73c750d799c4d6be855e99"} Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.347313 
4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.468623 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-bundle\") pod \"5973f973-351b-4a16-a21e-330a074ef1e3\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.469216 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-util\") pod \"5973f973-351b-4a16-a21e-330a074ef1e3\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.469314 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-bundle" (OuterVolumeSpecName: "bundle") pod "5973f973-351b-4a16-a21e-330a074ef1e3" (UID: "5973f973-351b-4a16-a21e-330a074ef1e3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.469376 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldgs9\" (UniqueName: \"kubernetes.io/projected/5973f973-351b-4a16-a21e-330a074ef1e3-kube-api-access-ldgs9\") pod \"5973f973-351b-4a16-a21e-330a074ef1e3\" (UID: \"5973f973-351b-4a16-a21e-330a074ef1e3\") " Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.469783 4809 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.478633 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5973f973-351b-4a16-a21e-330a074ef1e3-kube-api-access-ldgs9" (OuterVolumeSpecName: "kube-api-access-ldgs9") pod "5973f973-351b-4a16-a21e-330a074ef1e3" (UID: "5973f973-351b-4a16-a21e-330a074ef1e3"). InnerVolumeSpecName "kube-api-access-ldgs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.484381 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-util" (OuterVolumeSpecName: "util") pod "5973f973-351b-4a16-a21e-330a074ef1e3" (UID: "5973f973-351b-4a16-a21e-330a074ef1e3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.570916 4809 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5973f973-351b-4a16-a21e-330a074ef1e3-util\") on node \"crc\" DevicePath \"\"" Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.571027 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldgs9\" (UniqueName: \"kubernetes.io/projected/5973f973-351b-4a16-a21e-330a074ef1e3-kube-api-access-ldgs9\") on node \"crc\" DevicePath \"\"" Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.939523 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" event={"ID":"5973f973-351b-4a16-a21e-330a074ef1e3","Type":"ContainerDied","Data":"e5998b6070f4ce8c5e0dde0bcb43095718d52326b4c7083a7ed1b5c6d56fd8fe"} Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.939562 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5998b6070f4ce8c5e0dde0bcb43095718d52326b4c7083a7ed1b5c6d56fd8fe" Mar 12 08:18:42 crc kubenswrapper[4809]: I0312 08:18:42.939574 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f" Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.873772 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46"] Mar 12 08:18:49 crc kubenswrapper[4809]: E0312 08:18:49.874666 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5973f973-351b-4a16-a21e-330a074ef1e3" containerName="pull" Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.874681 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5973f973-351b-4a16-a21e-330a074ef1e3" containerName="pull" Mar 12 08:18:49 crc kubenswrapper[4809]: E0312 08:18:49.874696 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5973f973-351b-4a16-a21e-330a074ef1e3" containerName="extract" Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.874704 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5973f973-351b-4a16-a21e-330a074ef1e3" containerName="extract" Mar 12 08:18:49 crc kubenswrapper[4809]: E0312 08:18:49.874715 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5973f973-351b-4a16-a21e-330a074ef1e3" containerName="util" Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.874721 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5973f973-351b-4a16-a21e-330a074ef1e3" containerName="util" Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.874877 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5973f973-351b-4a16-a21e-330a074ef1e3" containerName="extract" Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.875484 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.878884 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-kk5lz" Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.898801 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46"] Mar 12 08:18:49 crc kubenswrapper[4809]: I0312 08:18:49.899168 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvvw\" (UniqueName: \"kubernetes.io/projected/53dea28f-c986-4b4e-a4da-757b2bc9435e-kube-api-access-8nvvw\") pod \"openstack-operator-controller-init-6bbc5b75f-zqc46\" (UID: \"53dea28f-c986-4b4e-a4da-757b2bc9435e\") " pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 08:18:50 crc kubenswrapper[4809]: I0312 08:18:50.000686 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nvvw\" (UniqueName: \"kubernetes.io/projected/53dea28f-c986-4b4e-a4da-757b2bc9435e-kube-api-access-8nvvw\") pod \"openstack-operator-controller-init-6bbc5b75f-zqc46\" (UID: \"53dea28f-c986-4b4e-a4da-757b2bc9435e\") " pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 08:18:50 crc kubenswrapper[4809]: I0312 08:18:50.023156 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nvvw\" (UniqueName: \"kubernetes.io/projected/53dea28f-c986-4b4e-a4da-757b2bc9435e-kube-api-access-8nvvw\") pod \"openstack-operator-controller-init-6bbc5b75f-zqc46\" (UID: \"53dea28f-c986-4b4e-a4da-757b2bc9435e\") " pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 08:18:50 crc kubenswrapper[4809]: I0312 08:18:50.197305 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 08:18:50 crc kubenswrapper[4809]: I0312 08:18:50.445637 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46"] Mar 12 08:18:50 crc kubenswrapper[4809]: W0312 08:18:50.451296 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53dea28f_c986_4b4e_a4da_757b2bc9435e.slice/crio-aef8a6f26a818e6cf84eef5f228716ccc9764eba0f126ff50c97a6e469e7687b WatchSource:0}: Error finding container aef8a6f26a818e6cf84eef5f228716ccc9764eba0f126ff50c97a6e469e7687b: Status 404 returned error can't find the container with id aef8a6f26a818e6cf84eef5f228716ccc9764eba0f126ff50c97a6e469e7687b Mar 12 08:18:51 crc kubenswrapper[4809]: I0312 08:18:51.006855 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" event={"ID":"53dea28f-c986-4b4e-a4da-757b2bc9435e","Type":"ContainerStarted","Data":"aef8a6f26a818e6cf84eef5f228716ccc9764eba0f126ff50c97a6e469e7687b"} Mar 12 08:18:55 crc kubenswrapper[4809]: I0312 08:18:55.051172 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" event={"ID":"53dea28f-c986-4b4e-a4da-757b2bc9435e","Type":"ContainerStarted","Data":"59204212673f7b5c85b79e1bc7d40904a427eee6a41202d08d8795132e222923"} Mar 12 08:18:55 crc kubenswrapper[4809]: I0312 08:18:55.052264 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 08:18:55 crc kubenswrapper[4809]: I0312 08:18:55.097137 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" podStartSLOduration=1.8277767470000001 
podStartE2EDuration="6.097098094s" podCreationTimestamp="2026-03-12 08:18:49 +0000 UTC" firstStartedPulling="2026-03-12 08:18:50.453820807 +0000 UTC m=+1204.035856540" lastFinishedPulling="2026-03-12 08:18:54.723142154 +0000 UTC m=+1208.305177887" observedRunningTime="2026-03-12 08:18:55.089097233 +0000 UTC m=+1208.671133016" watchObservedRunningTime="2026-03-12 08:18:55.097098094 +0000 UTC m=+1208.679133817" Mar 12 08:18:58 crc kubenswrapper[4809]: I0312 08:18:58.948231 4809 scope.go:117] "RemoveContainer" containerID="f846103c2aa84b4a246514941e6eefaa4f603e46dc61389f7662f4a4a2252af7" Mar 12 08:19:00 crc kubenswrapper[4809]: I0312 08:19:00.199830 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.080431 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.083063 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.088586 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.090149 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.095747 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-blf7l" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.095784 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-w226b" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.120197 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.136193 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.144868 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.146464 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.148821 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-bsw5n" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.153906 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.170494 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.172262 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.175372 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fp4xg" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.181882 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hslt\" (UniqueName: \"kubernetes.io/projected/ead62bdc-2a69-4b3a-a6c5-b60614a34263-kube-api-access-2hslt\") pod \"barbican-operator-controller-manager-677bd678f7-gtzwg\" (UID: \"ead62bdc-2a69-4b3a-a6c5-b60614a34263\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.191069 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.208077 4809 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.209220 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.221184 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jdr8n" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.238937 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.240115 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.252342 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-sfxm6" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.252354 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.272299 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.279246 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.283653 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wvc\" (UniqueName: \"kubernetes.io/projected/12b71885-6cb4-4888-9056-a39becec3670-kube-api-access-d4wvc\") pod \"glance-operator-controller-manager-5964f64c48-qtwhh\" (UID: \"12b71885-6cb4-4888-9056-a39becec3670\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.283723 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvjz\" (UniqueName: \"kubernetes.io/projected/275762b5-44af-4358-8562-9574a793b736-kube-api-access-nbvjz\") pod \"designate-operator-controller-manager-66d56f6ff4-vbjh6\" (UID: \"275762b5-44af-4358-8562-9574a793b736\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.283782 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjk8q\" (UniqueName: \"kubernetes.io/projected/8684cb78-fad5-4998-a52f-ba39be875af1-kube-api-access-sjk8q\") pod \"cinder-operator-controller-manager-984cd4dcf-7dq74\" (UID: \"8684cb78-fad5-4998-a52f-ba39be875af1\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.283821 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hslt\" (UniqueName: \"kubernetes.io/projected/ead62bdc-2a69-4b3a-a6c5-b60614a34263-kube-api-access-2hslt\") pod \"barbican-operator-controller-manager-677bd678f7-gtzwg\" (UID: \"ead62bdc-2a69-4b3a-a6c5-b60614a34263\") " 
pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.284876 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jl8c2" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.290612 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.290746 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.306925 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.310064 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.315531 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dpxvw" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.320098 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.350906 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.352058 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.360643 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-b6qwz" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.362813 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hslt\" (UniqueName: \"kubernetes.io/projected/ead62bdc-2a69-4b3a-a6c5-b60614a34263-kube-api-access-2hslt\") pod \"barbican-operator-controller-manager-677bd678f7-gtzwg\" (UID: \"ead62bdc-2a69-4b3a-a6c5-b60614a34263\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.367353 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.387068 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjk8q\" (UniqueName: \"kubernetes.io/projected/8684cb78-fad5-4998-a52f-ba39be875af1-kube-api-access-sjk8q\") pod \"cinder-operator-controller-manager-984cd4dcf-7dq74\" (UID: \"8684cb78-fad5-4998-a52f-ba39be875af1\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.387115 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjdx\" (UniqueName: \"kubernetes.io/projected/b7c605d7-46e5-4daa-beb3-4ef624bc0df9-kube-api-access-qcjdx\") pod \"heat-operator-controller-manager-77b6666d85-7c2ts\" (UID: \"b7c605d7-46e5-4daa-beb3-4ef624bc0df9\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.387148 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.387247 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4wvc\" (UniqueName: \"kubernetes.io/projected/12b71885-6cb4-4888-9056-a39becec3670-kube-api-access-d4wvc\") pod \"glance-operator-controller-manager-5964f64c48-qtwhh\" (UID: \"12b71885-6cb4-4888-9056-a39becec3670\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.387272 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvjz\" (UniqueName: \"kubernetes.io/projected/275762b5-44af-4358-8562-9574a793b736-kube-api-access-nbvjz\") pod \"designate-operator-controller-manager-66d56f6ff4-vbjh6\" (UID: \"275762b5-44af-4358-8562-9574a793b736\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.387292 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8mgr\" (UniqueName: \"kubernetes.io/projected/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-kube-api-access-r8mgr\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.387332 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6hl\" (UniqueName: 
\"kubernetes.io/projected/da29e412-21cc-4249-9791-55335156ff1b-kube-api-access-lq6hl\") pod \"horizon-operator-controller-manager-6d9d6b584d-cvm8g\" (UID: \"da29e412-21cc-4249-9791-55335156ff1b\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.393612 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.410477 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.411592 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.417107 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.422061 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.424525 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-blgnr" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.425961 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.431717 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjk8q\" (UniqueName: \"kubernetes.io/projected/8684cb78-fad5-4998-a52f-ba39be875af1-kube-api-access-sjk8q\") pod \"cinder-operator-controller-manager-984cd4dcf-7dq74\" (UID: \"8684cb78-fad5-4998-a52f-ba39be875af1\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.434259 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.443868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvjz\" (UniqueName: \"kubernetes.io/projected/275762b5-44af-4358-8562-9574a793b736-kube-api-access-nbvjz\") pod \"designate-operator-controller-manager-66d56f6ff4-vbjh6\" (UID: \"275762b5-44af-4358-8562-9574a793b736\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.445765 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.447597 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4wvc\" (UniqueName: \"kubernetes.io/projected/12b71885-6cb4-4888-9056-a39becec3670-kube-api-access-d4wvc\") pod \"glance-operator-controller-manager-5964f64c48-qtwhh\" (UID: \"12b71885-6cb4-4888-9056-a39becec3670\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.450789 4809 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-djpbt" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.474783 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.483620 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.524118 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8mgr\" (UniqueName: \"kubernetes.io/projected/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-kube-api-access-r8mgr\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.524947 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq6hl\" (UniqueName: \"kubernetes.io/projected/da29e412-21cc-4249-9791-55335156ff1b-kube-api-access-lq6hl\") pod \"horizon-operator-controller-manager-6d9d6b584d-cvm8g\" (UID: \"da29e412-21cc-4249-9791-55335156ff1b\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.524986 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtpn2\" (UniqueName: \"kubernetes.io/projected/a4ff847c-f029-4537-ab92-0ae803769dfc-kube-api-access-qtpn2\") pod \"ironic-operator-controller-manager-6bbb499bbc-rkf7l\" (UID: \"a4ff847c-f029-4537-ab92-0ae803769dfc\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.525038 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qcjdx\" (UniqueName: \"kubernetes.io/projected/b7c605d7-46e5-4daa-beb3-4ef624bc0df9-kube-api-access-qcjdx\") pod \"heat-operator-controller-manager-77b6666d85-7c2ts\" (UID: \"b7c605d7-46e5-4daa-beb3-4ef624bc0df9\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.525063 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.525153 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbnpp\" (UniqueName: \"kubernetes.io/projected/ddab063f-ed2f-416c-8730-55de13229f58-kube-api-access-lbnpp\") pod \"keystone-operator-controller-manager-684f77d66d-kwt4n\" (UID: \"ddab063f-ed2f-416c-8730-55de13229f58\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.525208 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jngbb\" (UniqueName: \"kubernetes.io/projected/a5138546-10af-4d98-96b5-b39dd71e9af1-kube-api-access-jngbb\") pod \"manila-operator-controller-manager-68f45f9d9f-dbw4q\" (UID: \"a5138546-10af-4d98-96b5-b39dd71e9af1\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.525306 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgv7c\" (UniqueName: 
\"kubernetes.io/projected/87b1729d-5a9d-4e35-bec1-21d7307020f2-kube-api-access-lgv7c\") pod \"mariadb-operator-controller-manager-658d4cdd5-mnhzr\" (UID: \"87b1729d-5a9d-4e35-bec1-21d7307020f2\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" Mar 12 08:19:39 crc kubenswrapper[4809]: E0312 08:19:39.525920 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:39 crc kubenswrapper[4809]: E0312 08:19:39.525978 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert podName:9da05ba1-fc66-48d8-a8ce-c99c04f0e416 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:40.025953072 +0000 UTC m=+1253.607988805 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert") pod "infra-operator-controller-manager-5995f4446f-mz9kq" (UID: "9da05ba1-fc66-48d8-a8ce-c99c04f0e416") : secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.529093 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.554206 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8mgr\" (UniqueName: \"kubernetes.io/projected/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-kube-api-access-r8mgr\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.568347 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.569463 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.572049 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-k2td2" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.572418 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq6hl\" (UniqueName: \"kubernetes.io/projected/da29e412-21cc-4249-9791-55335156ff1b-kube-api-access-lq6hl\") pod \"horizon-operator-controller-manager-6d9d6b584d-cvm8g\" (UID: \"da29e412-21cc-4249-9791-55335156ff1b\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.572739 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjdx\" (UniqueName: \"kubernetes.io/projected/b7c605d7-46e5-4daa-beb3-4ef624bc0df9-kube-api-access-qcjdx\") pod \"heat-operator-controller-manager-77b6666d85-7c2ts\" (UID: \"b7c605d7-46e5-4daa-beb3-4ef624bc0df9\") " 
pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.586116 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.636248 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.638386 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtpn2\" (UniqueName: \"kubernetes.io/projected/a4ff847c-f029-4537-ab92-0ae803769dfc-kube-api-access-qtpn2\") pod \"ironic-operator-controller-manager-6bbb499bbc-rkf7l\" (UID: \"a4ff847c-f029-4537-ab92-0ae803769dfc\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.638524 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbnpp\" (UniqueName: \"kubernetes.io/projected/ddab063f-ed2f-416c-8730-55de13229f58-kube-api-access-lbnpp\") pod \"keystone-operator-controller-manager-684f77d66d-kwt4n\" (UID: \"ddab063f-ed2f-416c-8730-55de13229f58\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.638587 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jngbb\" (UniqueName: \"kubernetes.io/projected/a5138546-10af-4d98-96b5-b39dd71e9af1-kube-api-access-jngbb\") pod \"manila-operator-controller-manager-68f45f9d9f-dbw4q\" (UID: \"a5138546-10af-4d98-96b5-b39dd71e9af1\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.638681 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lgv7c\" (UniqueName: \"kubernetes.io/projected/87b1729d-5a9d-4e35-bec1-21d7307020f2-kube-api-access-lgv7c\") pod \"mariadb-operator-controller-manager-658d4cdd5-mnhzr\" (UID: \"87b1729d-5a9d-4e35-bec1-21d7307020f2\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.638772 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.638823 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjbst\" (UniqueName: \"kubernetes.io/projected/b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e-kube-api-access-tjbst\") pod \"nova-operator-controller-manager-569cc54c5-tlxd5\" (UID: \"b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.647440 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-n6qtg" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.678030 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgv7c\" (UniqueName: \"kubernetes.io/projected/87b1729d-5a9d-4e35-bec1-21d7307020f2-kube-api-access-lgv7c\") pod \"mariadb-operator-controller-manager-658d4cdd5-mnhzr\" (UID: \"87b1729d-5a9d-4e35-bec1-21d7307020f2\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.699505 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbnpp\" (UniqueName: \"kubernetes.io/projected/ddab063f-ed2f-416c-8730-55de13229f58-kube-api-access-lbnpp\") pod 
\"keystone-operator-controller-manager-684f77d66d-kwt4n\" (UID: \"ddab063f-ed2f-416c-8730-55de13229f58\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.707739 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtpn2\" (UniqueName: \"kubernetes.io/projected/a4ff847c-f029-4537-ab92-0ae803769dfc-kube-api-access-qtpn2\") pod \"ironic-operator-controller-manager-6bbb499bbc-rkf7l\" (UID: \"a4ff847c-f029-4537-ab92-0ae803769dfc\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.713183 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jngbb\" (UniqueName: \"kubernetes.io/projected/a5138546-10af-4d98-96b5-b39dd71e9af1-kube-api-access-jngbb\") pod \"manila-operator-controller-manager-68f45f9d9f-dbw4q\" (UID: \"a5138546-10af-4d98-96b5-b39dd71e9af1\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.713927 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.720685 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.742547 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjbst\" (UniqueName: \"kubernetes.io/projected/b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e-kube-api-access-tjbst\") pod \"nova-operator-controller-manager-569cc54c5-tlxd5\" (UID: \"b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.746003 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.780711 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.800897 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.803278 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.813106 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-pzfdq" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.813482 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjbst\" (UniqueName: \"kubernetes.io/projected/b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e-kube-api-access-tjbst\") pod \"nova-operator-controller-manager-569cc54c5-tlxd5\" (UID: \"b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.849737 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8kp\" (UniqueName: \"kubernetes.io/projected/f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c-kube-api-access-rt8kp\") pod \"neutron-operator-controller-manager-776c5696bf-s5r4z\" (UID: \"f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.865163 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.868261 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.873166 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.875118 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.882207 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4j6bg" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.882467 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.892256 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.895060 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.899317 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.901270 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vmp88" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.954374 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j5lf\" (UniqueName: \"kubernetes.io/projected/d42ca3a9-74a0-4e76-ac25-730f412c28de-kube-api-access-2j5lf\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.954458 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7c6v\" (UniqueName: \"kubernetes.io/projected/e349e256-24bd-459e-b5d5-4bf9d85b2a5d-kube-api-access-z7c6v\") pod \"octavia-operator-controller-manager-5f4f55cb5c-nwrl4\" (UID: \"e349e256-24bd-459e-b5d5-4bf9d85b2a5d\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.954505 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.954527 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8kp\" (UniqueName: 
\"kubernetes.io/projected/f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c-kube-api-access-rt8kp\") pod \"neutron-operator-controller-manager-776c5696bf-s5r4z\" (UID: \"f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.967282 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4"] Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.978931 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.979827 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8kp\" (UniqueName: \"kubernetes.io/projected/f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c-kube-api-access-rt8kp\") pod \"neutron-operator-controller-manager-776c5696bf-s5r4z\" (UID: \"f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 08:19:39 crc kubenswrapper[4809]: I0312 08:19:39.994368 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.030018 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.030737 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.033361 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.034460 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.042005 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-twgf8" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.048166 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.058867 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j5lf\" (UniqueName: \"kubernetes.io/projected/d42ca3a9-74a0-4e76-ac25-730f412c28de-kube-api-access-2j5lf\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.058952 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.059006 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzh4b\" (UniqueName: \"kubernetes.io/projected/990522cb-5ef4-45d5-9eba-debcb4e51bae-kube-api-access-lzh4b\") pod \"ovn-operator-controller-manager-bbc5b68f9-9544k\" (UID: 
\"990522cb-5ef4-45d5-9eba-debcb4e51bae\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.059041 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7c6v\" (UniqueName: \"kubernetes.io/projected/e349e256-24bd-459e-b5d5-4bf9d85b2a5d-kube-api-access-z7c6v\") pod \"octavia-operator-controller-manager-5f4f55cb5c-nwrl4\" (UID: \"e349e256-24bd-459e-b5d5-4bf9d85b2a5d\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.059116 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.059350 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.059386 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.059421 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert podName:9da05ba1-fc66-48d8-a8ce-c99c04f0e416 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:41.059397057 +0000 UTC m=+1254.641432980 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert") pod "infra-operator-controller-manager-5995f4446f-mz9kq" (UID: "9da05ba1-fc66-48d8-a8ce-c99c04f0e416") : secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.059468 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert podName:d42ca3a9-74a0-4e76-ac25-730f412c28de nodeName:}" failed. No retries permitted until 2026-03-12 08:19:40.559441708 +0000 UTC m=+1254.141477441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" (UID: "d42ca3a9-74a0-4e76-ac25-730f412c28de") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.071224 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.073011 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.075576 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-r4q9c" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.086245 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j5lf\" (UniqueName: \"kubernetes.io/projected/d42ca3a9-74a0-4e76-ac25-730f412c28de-kube-api-access-2j5lf\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.088878 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-p24mv"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.093903 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.101629 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2vnv5" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.110290 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.112876 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7c6v\" (UniqueName: \"kubernetes.io/projected/e349e256-24bd-459e-b5d5-4bf9d85b2a5d-kube-api-access-z7c6v\") pod \"octavia-operator-controller-manager-5f4f55cb5c-nwrl4\" (UID: \"e349e256-24bd-459e-b5d5-4bf9d85b2a5d\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.125266 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-p24mv"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.137264 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.154494 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.154617 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.156639 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-qghxr" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.160988 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzpf\" (UniqueName: \"kubernetes.io/projected/3f72b8db-a17a-4b2c-b638-711766e4f6ed-kube-api-access-dbzpf\") pod \"placement-operator-controller-manager-574d45c66c-nsxhb\" (UID: \"3f72b8db-a17a-4b2c-b638-711766e4f6ed\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.161059 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7jl\" (UniqueName: \"kubernetes.io/projected/b40480af-2b15-4c8f-9bf2-f63ca0dd6870-kube-api-access-nn7jl\") pod \"telemetry-operator-controller-manager-57c6b5bd58-tt85t\" (UID: \"b40480af-2b15-4c8f-9bf2-f63ca0dd6870\") " pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.161125 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzh4b\" (UniqueName: \"kubernetes.io/projected/990522cb-5ef4-45d5-9eba-debcb4e51bae-kube-api-access-lzh4b\") pod \"ovn-operator-controller-manager-bbc5b68f9-9544k\" (UID: \"990522cb-5ef4-45d5-9eba-debcb4e51bae\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.183870 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.188907 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzh4b\" (UniqueName: \"kubernetes.io/projected/990522cb-5ef4-45d5-9eba-debcb4e51bae-kube-api-access-lzh4b\") pod \"ovn-operator-controller-manager-bbc5b68f9-9544k\" (UID: \"990522cb-5ef4-45d5-9eba-debcb4e51bae\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.196567 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.197756 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.200626 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fsj8j" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.204954 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.216782 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.218094 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.222022 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.222274 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b5b69" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.222664 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.234799 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.260486 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.262369 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.262703 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzpf\" (UniqueName: \"kubernetes.io/projected/3f72b8db-a17a-4b2c-b638-711766e4f6ed-kube-api-access-dbzpf\") pod \"placement-operator-controller-manager-574d45c66c-nsxhb\" (UID: \"3f72b8db-a17a-4b2c-b638-711766e4f6ed\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.262775 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7jl\" (UniqueName: \"kubernetes.io/projected/b40480af-2b15-4c8f-9bf2-f63ca0dd6870-kube-api-access-nn7jl\") pod \"telemetry-operator-controller-manager-57c6b5bd58-tt85t\" (UID: \"b40480af-2b15-4c8f-9bf2-f63ca0dd6870\") " pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.262838 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twn2z\" (UniqueName: \"kubernetes.io/projected/60e08cbe-2284-4030-8073-892fd74bcdc6-kube-api-access-twn2z\") pod \"swift-operator-controller-manager-677c674df7-p24mv\" (UID: \"60e08cbe-2284-4030-8073-892fd74bcdc6\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.262933 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4bt\" (UniqueName: \"kubernetes.io/projected/4abc098b-51aa-4483-93e1-4880178f6167-kube-api-access-rh4bt\") pod \"test-operator-controller-manager-5c5cb9c4d7-rw46j\" (UID: \"4abc098b-51aa-4483-93e1-4880178f6167\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 08:19:40 
crc kubenswrapper[4809]: I0312 08:19:40.264941 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hdf5s" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.267270 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.272397 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.295970 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzpf\" (UniqueName: \"kubernetes.io/projected/3f72b8db-a17a-4b2c-b638-711766e4f6ed-kube-api-access-dbzpf\") pod \"placement-operator-controller-manager-574d45c66c-nsxhb\" (UID: \"3f72b8db-a17a-4b2c-b638-711766e4f6ed\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.300628 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7jl\" (UniqueName: \"kubernetes.io/projected/b40480af-2b15-4c8f-9bf2-f63ca0dd6870-kube-api-access-nn7jl\") pod \"telemetry-operator-controller-manager-57c6b5bd58-tt85t\" (UID: \"b40480af-2b15-4c8f-9bf2-f63ca0dd6870\") " pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.304495 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.369083 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs\") pod 
\"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.369513 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4bt\" (UniqueName: \"kubernetes.io/projected/4abc098b-51aa-4483-93e1-4880178f6167-kube-api-access-rh4bt\") pod \"test-operator-controller-manager-5c5cb9c4d7-rw46j\" (UID: \"4abc098b-51aa-4483-93e1-4880178f6167\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.369590 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnqx\" (UniqueName: \"kubernetes.io/projected/59492e92-3148-41d4-86ab-0a69de5a3518-kube-api-access-9wnqx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tc5f9\" (UID: \"59492e92-3148-41d4-86ab-0a69de5a3518\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.369694 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.369716 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2489\" (UniqueName: \"kubernetes.io/projected/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-kube-api-access-b2489\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " 
pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.369823 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdhv\" (UniqueName: \"kubernetes.io/projected/bd9084af-4a31-4802-b9b2-827b0ad53628-kube-api-access-ncdhv\") pod \"watcher-operator-controller-manager-6dd88c6f67-svzr9\" (UID: \"bd9084af-4a31-4802-b9b2-827b0ad53628\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.369921 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twn2z\" (UniqueName: \"kubernetes.io/projected/60e08cbe-2284-4030-8073-892fd74bcdc6-kube-api-access-twn2z\") pod \"swift-operator-controller-manager-677c674df7-p24mv\" (UID: \"60e08cbe-2284-4030-8073-892fd74bcdc6\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.451284 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.529927 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twn2z\" (UniqueName: \"kubernetes.io/projected/60e08cbe-2284-4030-8073-892fd74bcdc6-kube-api-access-twn2z\") pod \"swift-operator-controller-manager-677c674df7-p24mv\" (UID: \"60e08cbe-2284-4030-8073-892fd74bcdc6\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.537210 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.538572 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.551977 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnqx\" (UniqueName: \"kubernetes.io/projected/59492e92-3148-41d4-86ab-0a69de5a3518-kube-api-access-9wnqx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tc5f9\" (UID: \"59492e92-3148-41d4-86ab-0a69de5a3518\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.552174 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.552211 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2489\" (UniqueName: \"kubernetes.io/projected/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-kube-api-access-b2489\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.552231 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncdhv\" (UniqueName: \"kubernetes.io/projected/bd9084af-4a31-4802-b9b2-827b0ad53628-kube-api-access-ncdhv\") pod 
\"watcher-operator-controller-manager-6dd88c6f67-svzr9\" (UID: \"bd9084af-4a31-4802-b9b2-827b0ad53628\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.552358 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.552567 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.552640 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:41.052623489 +0000 UTC m=+1254.634659222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "metrics-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.552665 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.552793 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. 
No retries permitted until 2026-03-12 08:19:41.052750052 +0000 UTC m=+1254.634785865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "webhook-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.555318 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh4bt\" (UniqueName: \"kubernetes.io/projected/4abc098b-51aa-4483-93e1-4880178f6167-kube-api-access-rh4bt\") pod \"test-operator-controller-manager-5c5cb9c4d7-rw46j\" (UID: \"4abc098b-51aa-4483-93e1-4880178f6167\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.587765 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2489\" (UniqueName: \"kubernetes.io/projected/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-kube-api-access-b2489\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.602627 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnqx\" (UniqueName: \"kubernetes.io/projected/59492e92-3148-41d4-86ab-0a69de5a3518-kube-api-access-9wnqx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tc5f9\" (UID: \"59492e92-3148-41d4-86ab-0a69de5a3518\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.620090 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncdhv\" (UniqueName: 
\"kubernetes.io/projected/bd9084af-4a31-4802-b9b2-827b0ad53628-kube-api-access-ncdhv\") pod \"watcher-operator-controller-manager-6dd88c6f67-svzr9\" (UID: \"bd9084af-4a31-4802-b9b2-827b0ad53628\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.622836 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.638319 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.650769 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" event={"ID":"8684cb78-fad5-4998-a52f-ba39be875af1","Type":"ContainerStarted","Data":"9b0ae74a04ed0e6cc0ea63c2966d0bdf57daf82888571b06cd8287cfe2a9801b"} Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.655191 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.655482 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: E0312 08:19:40.655532 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert podName:d42ca3a9-74a0-4e76-ac25-730f412c28de nodeName:}" 
failed. No retries permitted until 2026-03-12 08:19:41.655517736 +0000 UTC m=+1255.237553469 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" (UID: "d42ca3a9-74a0-4e76-ac25-730f412c28de") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.790333 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg"] Mar 12 08:19:40 crc kubenswrapper[4809]: I0312 08:19:40.797159 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.014325 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh"] Mar 12 08:19:41 crc kubenswrapper[4809]: W0312 08:19:41.035232 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12b71885_6cb4_4888_9056_a39becec3670.slice/crio-99aec6d7250b77abc5164a555db9b62cc40c5fec24a2293f3bdf14e9d2861a43 WatchSource:0}: Error finding container 99aec6d7250b77abc5164a555db9b62cc40c5fec24a2293f3bdf14e9d2861a43: Status 404 returned error can't find the container with id 99aec6d7250b77abc5164a555db9b62cc40c5fec24a2293f3bdf14e9d2861a43 Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.080095 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6"] Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.089760 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.089846 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.089958 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:41 crc kubenswrapper[4809]: E0312 08:19:41.090098 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 08:19:41 crc kubenswrapper[4809]: E0312 08:19:41.090167 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:42.090149047 +0000 UTC m=+1255.672184780 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "webhook-server-cert" not found Mar 12 08:19:41 crc kubenswrapper[4809]: E0312 08:19:41.090480 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:41 crc kubenswrapper[4809]: E0312 08:19:41.090505 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert podName:9da05ba1-fc66-48d8-a8ce-c99c04f0e416 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:43.090496976 +0000 UTC m=+1256.672532709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert") pod "infra-operator-controller-manager-5995f4446f-mz9kq" (UID: "9da05ba1-fc66-48d8-a8ce-c99c04f0e416") : secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:41 crc kubenswrapper[4809]: E0312 08:19:41.090542 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 08:19:41 crc kubenswrapper[4809]: E0312 08:19:41.090561 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:42.090555257 +0000 UTC m=+1255.672590990 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "metrics-server-cert" not found Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.153966 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr"] Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.244975 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l"] Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.254381 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g"] Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.260572 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts"] Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.506525 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n"] Mar 12 08:19:41 crc kubenswrapper[4809]: W0312 08:19:41.514547 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddab063f_ed2f_416c_8730_55de13229f58.slice/crio-714afc2a4ee0e9c60de46a362313b2dca0e5b35cc7c78ceaeb5d997a580baa21 WatchSource:0}: Error finding container 714afc2a4ee0e9c60de46a362313b2dca0e5b35cc7c78ceaeb5d997a580baa21: Status 404 returned error can't find the container with id 714afc2a4ee0e9c60de46a362313b2dca0e5b35cc7c78ceaeb5d997a580baa21 Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.661078 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" event={"ID":"ddab063f-ed2f-416c-8730-55de13229f58","Type":"ContainerStarted","Data":"714afc2a4ee0e9c60de46a362313b2dca0e5b35cc7c78ceaeb5d997a580baa21"} Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.662873 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" event={"ID":"87b1729d-5a9d-4e35-bec1-21d7307020f2","Type":"ContainerStarted","Data":"4c2302e21e222b51cd5eaec4ee931acc26ada2d893b88e0f6376d39a5ff9913c"} Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.664471 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" event={"ID":"da29e412-21cc-4249-9791-55335156ff1b","Type":"ContainerStarted","Data":"333ac85781823494feb8aa70392cf73e7209570500a39c6c41d7781c980e7785"} Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.665838 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" event={"ID":"b7c605d7-46e5-4daa-beb3-4ef624bc0df9","Type":"ContainerStarted","Data":"bd1a49122f513e1c70594981c22523bce729c30b78afe33993b031c136e6d928"} Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.667037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" event={"ID":"275762b5-44af-4358-8562-9574a793b736","Type":"ContainerStarted","Data":"c5eb7e1566587dc757f893ba2e8e6431f4070b11dd7905a7934723a8733c5ee1"} Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.668650 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" event={"ID":"ead62bdc-2a69-4b3a-a6c5-b60614a34263","Type":"ContainerStarted","Data":"2ecc315923a9a0f4d9ed8d4877b153675388acf4df4164e2a3cfb0da5a3458d3"} Mar 12 08:19:41 crc 
kubenswrapper[4809]: I0312 08:19:41.669867 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" event={"ID":"a4ff847c-f029-4537-ab92-0ae803769dfc","Type":"ContainerStarted","Data":"b3ad00697e6a90a57d1ee86d44a546e9b27b462abd4976028274c9b02af1e7d5"} Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.671333 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" event={"ID":"12b71885-6cb4-4888-9056-a39becec3670","Type":"ContainerStarted","Data":"99aec6d7250b77abc5164a555db9b62cc40c5fec24a2293f3bdf14e9d2861a43"} Mar 12 08:19:41 crc kubenswrapper[4809]: I0312 08:19:41.702658 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:41 crc kubenswrapper[4809]: E0312 08:19:41.702926 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:41 crc kubenswrapper[4809]: E0312 08:19:41.702971 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert podName:d42ca3a9-74a0-4e76-ac25-730f412c28de nodeName:}" failed. No retries permitted until 2026-03-12 08:19:43.702958307 +0000 UTC m=+1257.284994040 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" (UID: "d42ca3a9-74a0-4e76-ac25-730f412c28de") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.117467 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.117625 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.117656 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.117777 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:44.117752488 +0000 UTC m=+1257.699788221 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "webhook-server-cert" not found Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.117817 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.117931 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:44.117904753 +0000 UTC m=+1257.699940486 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "metrics-server-cert" not found Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.245270 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.277316 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.307287 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.346407 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.363272 
4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.370869 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.391171 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k"] Mar 12 08:19:42 crc kubenswrapper[4809]: W0312 08:19:42.392840 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3a998c8_9d1c_46ce_9bf6_adcbf704fb5c.slice/crio-1569305d85fb81b1671739b059c67cb0048aacd844061dab097c2e6ea16fa002 WatchSource:0}: Error finding container 1569305d85fb81b1671739b059c67cb0048aacd844061dab097c2e6ea16fa002: Status 404 returned error can't find the container with id 1569305d85fb81b1671739b059c67cb0048aacd844061dab097c2e6ea16fa002 Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.442572 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.495531 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j"] Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.496394 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rh4bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-rw46j_openstack-operators(4abc098b-51aa-4483-93e1-4880178f6167): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.498090 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podUID="4abc098b-51aa-4483-93e1-4880178f6167" Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.499245 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.158:5001/openstack-k8s-operators/telemetry-operator:8ccfcdb23140e93b912bb23903a7d6fafb754e30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nn7jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-57c6b5bd58-tt85t_openstack-operators(b40480af-2b15-4c8f-9bf2-f63ca0dd6870): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.501264 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.510225 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-p24mv"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.522280 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t"] Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.691161 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" 
event={"ID":"a5138546-10af-4d98-96b5-b39dd71e9af1","Type":"ContainerStarted","Data":"eded78391586fd5a74fab35c3ff38befc89e502627d0c62a480fcc342be007b0"} Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.693291 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" event={"ID":"60e08cbe-2284-4030-8073-892fd74bcdc6","Type":"ContainerStarted","Data":"fd61fde6e83cf0cf634ad969e9e39d4edd0456e487994ced8a1c763e9e338977"} Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.698370 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" event={"ID":"3f72b8db-a17a-4b2c-b638-711766e4f6ed","Type":"ContainerStarted","Data":"15e315f25ec844c5458351bd9969bc9d641866fd6b7b9ca56ad7fedccac50f88"} Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.700657 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" event={"ID":"b40480af-2b15-4c8f-9bf2-f63ca0dd6870","Type":"ContainerStarted","Data":"b4ed582c6b80482776d3be05dcd8caa77377f065d8b1e32e432e5b610fef93f6"} Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.702808 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" event={"ID":"b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e","Type":"ContainerStarted","Data":"b718652a182698dcee8cb3e6811789a400eae4b75de7018c15b305861644759e"} Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.705334 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" event={"ID":"bd9084af-4a31-4802-b9b2-827b0ad53628","Type":"ContainerStarted","Data":"d76c6a2adfe71d6e012819da1005d961d7283c1a905a8983193edc9b7229cc45"} Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.708298 4809 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/openstack-k8s-operators/telemetry-operator:8ccfcdb23140e93b912bb23903a7d6fafb754e30\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.714439 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" event={"ID":"990522cb-5ef4-45d5-9eba-debcb4e51bae","Type":"ContainerStarted","Data":"3a26bff3e1a63c37bc154f8098a92b394ca0da657ad3d3a97648640a1bbea208"} Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.720025 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" event={"ID":"4abc098b-51aa-4483-93e1-4880178f6167","Type":"ContainerStarted","Data":"16b525ac4f545942784c821674df9de58d0d2ca141a7f9b9d47463f730513e7c"} Mar 12 08:19:42 crc kubenswrapper[4809]: E0312 08:19:42.723837 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podUID="4abc098b-51aa-4483-93e1-4880178f6167" Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.730827 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" event={"ID":"59492e92-3148-41d4-86ab-0a69de5a3518","Type":"ContainerStarted","Data":"8bc3d87eb08450a1bbee8cca509723a58dfc3f80b86755da8e51027ef4ab53fe"} Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.745658 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" event={"ID":"e349e256-24bd-459e-b5d5-4bf9d85b2a5d","Type":"ContainerStarted","Data":"1f501c206f073dbdcb5e626664f5587f2902efe4684e30b24e0a5c5b898392b1"} Mar 12 08:19:42 crc kubenswrapper[4809]: I0312 08:19:42.748316 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" event={"ID":"f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c","Type":"ContainerStarted","Data":"1569305d85fb81b1671739b059c67cb0048aacd844061dab097c2e6ea16fa002"} Mar 12 08:19:43 crc kubenswrapper[4809]: I0312 08:19:43.157467 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:43 crc kubenswrapper[4809]: E0312 08:19:43.158237 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:43 crc kubenswrapper[4809]: E0312 08:19:43.158331 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert podName:9da05ba1-fc66-48d8-a8ce-c99c04f0e416 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:47.158297799 +0000 UTC m=+1260.740333542 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert") pod "infra-operator-controller-manager-5995f4446f-mz9kq" (UID: "9da05ba1-fc66-48d8-a8ce-c99c04f0e416") : secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:43 crc kubenswrapper[4809]: I0312 08:19:43.770913 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:43 crc kubenswrapper[4809]: E0312 08:19:43.771248 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:43 crc kubenswrapper[4809]: E0312 08:19:43.771396 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert podName:d42ca3a9-74a0-4e76-ac25-730f412c28de nodeName:}" failed. No retries permitted until 2026-03-12 08:19:47.771361186 +0000 UTC m=+1261.353396919 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" (UID: "d42ca3a9-74a0-4e76-ac25-730f412c28de") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:43 crc kubenswrapper[4809]: E0312 08:19:43.781981 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/openstack-k8s-operators/telemetry-operator:8ccfcdb23140e93b912bb23903a7d6fafb754e30\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" Mar 12 08:19:43 crc kubenswrapper[4809]: E0312 08:19:43.782171 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podUID="4abc098b-51aa-4483-93e1-4880178f6167" Mar 12 08:19:44 crc kubenswrapper[4809]: I0312 08:19:44.177098 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:44 crc kubenswrapper[4809]: I0312 08:19:44.177387 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" 
(UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:44 crc kubenswrapper[4809]: E0312 08:19:44.178806 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 08:19:44 crc kubenswrapper[4809]: E0312 08:19:44.178879 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:48.178857496 +0000 UTC m=+1261.760893229 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "metrics-server-cert" not found Mar 12 08:19:44 crc kubenswrapper[4809]: E0312 08:19:44.179797 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 08:19:44 crc kubenswrapper[4809]: E0312 08:19:44.179889 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:48.179862813 +0000 UTC m=+1261.761898546 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "webhook-server-cert" not found Mar 12 08:19:47 crc kubenswrapper[4809]: I0312 08:19:47.172593 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:47 crc kubenswrapper[4809]: E0312 08:19:47.172895 4809 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:47 crc kubenswrapper[4809]: E0312 08:19:47.173268 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert podName:9da05ba1-fc66-48d8-a8ce-c99c04f0e416 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:55.173225994 +0000 UTC m=+1268.755261767 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert") pod "infra-operator-controller-manager-5995f4446f-mz9kq" (UID: "9da05ba1-fc66-48d8-a8ce-c99c04f0e416") : secret "infra-operator-webhook-server-cert" not found Mar 12 08:19:47 crc kubenswrapper[4809]: I0312 08:19:47.785037 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:47 crc kubenswrapper[4809]: E0312 08:19:47.785487 4809 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:47 crc kubenswrapper[4809]: E0312 08:19:47.785721 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert podName:d42ca3a9-74a0-4e76-ac25-730f412c28de nodeName:}" failed. No retries permitted until 2026-03-12 08:19:55.785687926 +0000 UTC m=+1269.367723659 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" (UID: "d42ca3a9-74a0-4e76-ac25-730f412c28de") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:48 crc kubenswrapper[4809]: I0312 08:19:48.193179 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:48 crc kubenswrapper[4809]: I0312 08:19:48.193308 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:48 crc kubenswrapper[4809]: E0312 08:19:48.193421 4809 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 08:19:48 crc kubenswrapper[4809]: E0312 08:19:48.193472 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:56.193455792 +0000 UTC m=+1269.775491525 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "metrics-server-cert" not found Mar 12 08:19:48 crc kubenswrapper[4809]: E0312 08:19:48.193987 4809 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 08:19:48 crc kubenswrapper[4809]: E0312 08:19:48.194020 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs podName:2ef4d6d0-1c93-4f10-bd15-5de5ede76c62 nodeName:}" failed. No retries permitted until 2026-03-12 08:19:56.194011107 +0000 UTC m=+1269.776046840 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs") pod "openstack-operator-controller-manager-5d5444f5b-xmqds" (UID: "2ef4d6d0-1c93-4f10-bd15-5de5ede76c62") : secret "webhook-server-cert" not found Mar 12 08:19:54 crc kubenswrapper[4809]: E0312 08:19:54.221313 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423" Mar 12 08:19:54 crc kubenswrapper[4809]: E0312 08:19:54.222716 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2hslt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-677bd678f7-gtzwg_openstack-operators(ead62bdc-2a69-4b3a-a6c5-b60614a34263): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:19:54 crc kubenswrapper[4809]: E0312 08:19:54.224691 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podUID="ead62bdc-2a69-4b3a-a6c5-b60614a34263" Mar 12 08:19:54 crc kubenswrapper[4809]: E0312 08:19:54.924786 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b99cd5e08bd85c6aaf717519187ba7bfeea359e1537d43b73a7364b7c38116e2" Mar 12 08:19:54 crc kubenswrapper[4809]: E0312 08:19:54.925183 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b99cd5e08bd85c6aaf717519187ba7bfeea359e1537d43b73a7364b7c38116e2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lgv7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-658d4cdd5-mnhzr_openstack-operators(87b1729d-5a9d-4e35-bec1-21d7307020f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:19:54 crc kubenswrapper[4809]: E0312 08:19:54.926384 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" podUID="87b1729d-5a9d-4e35-bec1-21d7307020f2" Mar 12 08:19:54 crc kubenswrapper[4809]: E0312 08:19:54.932648 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podUID="ead62bdc-2a69-4b3a-a6c5-b60614a34263" Mar 12 08:19:55 crc kubenswrapper[4809]: I0312 08:19:55.178099 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:55 crc kubenswrapper[4809]: I0312 08:19:55.187152 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9da05ba1-fc66-48d8-a8ce-c99c04f0e416-cert\") pod \"infra-operator-controller-manager-5995f4446f-mz9kq\" (UID: \"9da05ba1-fc66-48d8-a8ce-c99c04f0e416\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:55 crc kubenswrapper[4809]: I0312 08:19:55.279532 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jl8c2" Mar 12 08:19:55 crc kubenswrapper[4809]: I0312 08:19:55.287269 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:19:55 crc kubenswrapper[4809]: E0312 08:19:55.528820 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f" Mar 12 08:19:55 crc kubenswrapper[4809]: E0312 08:19:55.529013 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lzh4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-9544k_openstack-operators(990522cb-5ef4-45d5-9eba-debcb4e51bae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:19:55 crc kubenswrapper[4809]: E0312 08:19:55.530271 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" podUID="990522cb-5ef4-45d5-9eba-debcb4e51bae" Mar 12 08:19:55 crc kubenswrapper[4809]: I0312 08:19:55.788504 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:19:55 crc kubenswrapper[4809]: E0312 08:19:55.788911 4809 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:55 crc kubenswrapper[4809]: E0312 08:19:55.789060 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert podName:d42ca3a9-74a0-4e76-ac25-730f412c28de nodeName:}" failed. No retries permitted until 2026-03-12 08:20:11.789021672 +0000 UTC m=+1285.371057445 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" (UID: "d42ca3a9-74a0-4e76-ac25-730f412c28de") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 08:19:55 crc kubenswrapper[4809]: E0312 08:19:55.940525 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" podUID="990522cb-5ef4-45d5-9eba-debcb4e51bae" Mar 12 08:19:55 crc kubenswrapper[4809]: E0312 08:19:55.941087 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b99cd5e08bd85c6aaf717519187ba7bfeea359e1537d43b73a7364b7c38116e2\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" podUID="87b1729d-5a9d-4e35-bec1-21d7307020f2" Mar 12 08:19:56 crc kubenswrapper[4809]: E0312 08:19:56.143859 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978" Mar 12 08:19:56 crc kubenswrapper[4809]: E0312 08:19:56.144169 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbzpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-574d45c66c-nsxhb_openstack-operators(3f72b8db-a17a-4b2c-b638-711766e4f6ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:19:56 crc kubenswrapper[4809]: E0312 08:19:56.145442 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" podUID="3f72b8db-a17a-4b2c-b638-711766e4f6ed" Mar 12 08:19:56 crc kubenswrapper[4809]: I0312 08:19:56.198007 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:56 crc kubenswrapper[4809]: I0312 08:19:56.199965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:56 crc kubenswrapper[4809]: I0312 08:19:56.205405 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-metrics-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:56 crc kubenswrapper[4809]: I0312 08:19:56.207897 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2ef4d6d0-1c93-4f10-bd15-5de5ede76c62-webhook-certs\") pod \"openstack-operator-controller-manager-5d5444f5b-xmqds\" (UID: \"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62\") " pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:56 crc kubenswrapper[4809]: I0312 08:19:56.432980 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b5b69" Mar 12 08:19:56 crc kubenswrapper[4809]: I0312 08:19:56.440023 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:19:56 crc kubenswrapper[4809]: E0312 08:19:56.952680 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" podUID="3f72b8db-a17a-4b2c-b638-711766e4f6ed" Mar 12 08:19:57 crc kubenswrapper[4809]: E0312 08:19:57.081854 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6" Mar 12 08:19:57 crc kubenswrapper[4809]: E0312 08:19:57.082063 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lq6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-6d9d6b584d-cvm8g_openstack-operators(da29e412-21cc-4249-9791-55335156ff1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:19:57 crc kubenswrapper[4809]: E0312 08:19:57.083396 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" podUID="da29e412-21cc-4249-9791-55335156ff1b" Mar 12 08:19:57 crc kubenswrapper[4809]: E0312 08:19:57.961552 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" podUID="da29e412-21cc-4249-9791-55335156ff1b" Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.147263 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555060-zzm7h"] Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.149954 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555060-zzm7h" Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.153445 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.153951 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.155021 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.175271 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555060-zzm7h"] Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.297949 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd9g5\" (UniqueName: \"kubernetes.io/projected/eb2286a3-9a77-43c2-903d-08ab8468c9b6-kube-api-access-gd9g5\") pod \"auto-csr-approver-29555060-zzm7h\" 
(UID: \"eb2286a3-9a77-43c2-903d-08ab8468c9b6\") " pod="openshift-infra/auto-csr-approver-29555060-zzm7h" Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.401047 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd9g5\" (UniqueName: \"kubernetes.io/projected/eb2286a3-9a77-43c2-903d-08ab8468c9b6-kube-api-access-gd9g5\") pod \"auto-csr-approver-29555060-zzm7h\" (UID: \"eb2286a3-9a77-43c2-903d-08ab8468c9b6\") " pod="openshift-infra/auto-csr-approver-29555060-zzm7h" Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.431745 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd9g5\" (UniqueName: \"kubernetes.io/projected/eb2286a3-9a77-43c2-903d-08ab8468c9b6-kube-api-access-gd9g5\") pod \"auto-csr-approver-29555060-zzm7h\" (UID: \"eb2286a3-9a77-43c2-903d-08ab8468c9b6\") " pod="openshift-infra/auto-csr-approver-29555060-zzm7h" Mar 12 08:20:00 crc kubenswrapper[4809]: I0312 08:20:00.492098 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555060-zzm7h" Mar 12 08:20:01 crc kubenswrapper[4809]: E0312 08:20:01.522847 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f" Mar 12 08:20:01 crc kubenswrapper[4809]: E0312 08:20:01.523210 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtpn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6bbb499bbc-rkf7l_openstack-operators(a4ff847c-f029-4537-ab92-0ae803769dfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:20:01 crc kubenswrapper[4809]: E0312 08:20:01.524476 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" podUID="a4ff847c-f029-4537-ab92-0ae803769dfc" Mar 12 08:20:02 crc kubenswrapper[4809]: E0312 08:20:02.010358 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" podUID="a4ff847c-f029-4537-ab92-0ae803769dfc" Mar 12 08:20:02 crc kubenswrapper[4809]: E0312 08:20:02.104389 4809 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c" Mar 12 08:20:02 crc kubenswrapper[4809]: E0312 08:20:02.105024 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twn2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-677c674df7-p24mv_openstack-operators(60e08cbe-2284-4030-8073-892fd74bcdc6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:20:02 crc kubenswrapper[4809]: E0312 08:20:02.106612 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" podUID="60e08cbe-2284-4030-8073-892fd74bcdc6" Mar 12 08:20:02 crc kubenswrapper[4809]: E0312 08:20:02.731092 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721" Mar 12 08:20:02 crc kubenswrapper[4809]: E0312 08:20:02.731363 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rt8kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-776c5696bf-s5r4z_openstack-operators(f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:20:02 crc kubenswrapper[4809]: E0312 08:20:02.732496 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" podUID="f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c" Mar 12 08:20:03 crc kubenswrapper[4809]: E0312 08:20:03.018144 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" podUID="60e08cbe-2284-4030-8073-892fd74bcdc6" Mar 12 08:20:03 crc kubenswrapper[4809]: E0312 08:20:03.018493 4809 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" podUID="f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c" Mar 12 08:20:03 crc kubenswrapper[4809]: E0312 08:20:03.502826 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922" Mar 12 08:20:03 crc kubenswrapper[4809]: E0312 08:20:03.503072 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjbst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-569cc54c5-tlxd5_openstack-operators(b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:20:03 crc kubenswrapper[4809]: E0312 08:20:03.504891 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" Mar 12 08:20:04 crc kubenswrapper[4809]: E0312 08:20:04.029277 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" Mar 12 08:20:05 crc kubenswrapper[4809]: E0312 08:20:05.582073 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571" Mar 12 08:20:05 crc kubenswrapper[4809]: E0312 08:20:05.582719 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7c6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-nwrl4_openstack-operators(e349e256-24bd-459e-b5d5-4bf9d85b2a5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:20:05 crc kubenswrapper[4809]: E0312 08:20:05.583947 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" podUID="e349e256-24bd-459e-b5d5-4bf9d85b2a5d" Mar 12 08:20:06 crc kubenswrapper[4809]: E0312 08:20:06.070663 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" podUID="e349e256-24bd-459e-b5d5-4bf9d85b2a5d" Mar 12 08:20:07 crc kubenswrapper[4809]: E0312 08:20:07.036577 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 12 08:20:07 crc kubenswrapper[4809]: E0312 08:20:07.036791 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wnqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tc5f9_openstack-operators(59492e92-3148-41d4-86ab-0a69de5a3518): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:20:07 crc kubenswrapper[4809]: E0312 08:20:07.037991 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" podUID="59492e92-3148-41d4-86ab-0a69de5a3518" Mar 12 08:20:07 crc kubenswrapper[4809]: E0312 08:20:07.083847 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" podUID="59492e92-3148-41d4-86ab-0a69de5a3518" Mar 12 08:20:07 crc 
kubenswrapper[4809]: E0312 08:20:07.169834 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.158:5001/openstack-k8s-operators/telemetry-operator:8ccfcdb23140e93b912bb23903a7d6fafb754e30" Mar 12 08:20:07 crc kubenswrapper[4809]: E0312 08:20:07.170423 4809 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.158:5001/openstack-k8s-operators/telemetry-operator:8ccfcdb23140e93b912bb23903a7d6fafb754e30" Mar 12 08:20:07 crc kubenswrapper[4809]: E0312 08:20:07.170575 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.158:5001/openstack-k8s-operators/telemetry-operator:8ccfcdb23140e93b912bb23903a7d6fafb754e30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nn7jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-57c6b5bd58-tt85t_openstack-operators(b40480af-2b15-4c8f-9bf2-f63ca0dd6870): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:20:07 crc kubenswrapper[4809]: E0312 08:20:07.171927 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" Mar 12 08:20:07 crc kubenswrapper[4809]: I0312 08:20:07.586102 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq"] Mar 12 08:20:07 crc kubenswrapper[4809]: W0312 08:20:07.607264 4809 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da05ba1_fc66_48d8_a8ce_c99c04f0e416.slice/crio-c10c405edb60c12d4b3c9cd9563c86d4cb6d30a20b089889cc07f3e4658b71e8 WatchSource:0}: Error finding container c10c405edb60c12d4b3c9cd9563c86d4cb6d30a20b089889cc07f3e4658b71e8: Status 404 returned error can't find the container with id c10c405edb60c12d4b3c9cd9563c86d4cb6d30a20b089889cc07f3e4658b71e8 Mar 12 08:20:07 crc kubenswrapper[4809]: I0312 08:20:07.720744 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555060-zzm7h"] Mar 12 08:20:07 crc kubenswrapper[4809]: I0312 08:20:07.746907 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds"] Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.088256 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" event={"ID":"ddab063f-ed2f-416c-8730-55de13229f58","Type":"ContainerStarted","Data":"9ee038f1181b6f3b6417072ef4e5e67f21ebf131f1f20d5ec566828a121b8f46"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.088899 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.090208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" event={"ID":"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62","Type":"ContainerStarted","Data":"8773fde33573d12a0d7147acbbec3014ffa6a9a66b0b4c08adbadd6ff0a4624f"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.090265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" 
event={"ID":"2ef4d6d0-1c93-4f10-bd15-5de5ede76c62","Type":"ContainerStarted","Data":"9019fe75600957436dea75c1a3e048800c21ed7298c4765cca0fa4c2b8404cf0"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.090628 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.092066 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" event={"ID":"a5138546-10af-4d98-96b5-b39dd71e9af1","Type":"ContainerStarted","Data":"aeef8f1155c8babe3852bd8271bd00f321afa81f2c2a5423a9782c7ed8b2e031"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.092199 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.094481 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" event={"ID":"9da05ba1-fc66-48d8-a8ce-c99c04f0e416","Type":"ContainerStarted","Data":"c10c405edb60c12d4b3c9cd9563c86d4cb6d30a20b089889cc07f3e4658b71e8"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.103698 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" event={"ID":"12b71885-6cb4-4888-9056-a39becec3670","Type":"ContainerStarted","Data":"1ba19f7b425e9d693c40c87763d2ecdbafcc10f5657316f8b7654818dbd01284"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.105696 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.107472 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" event={"ID":"bd9084af-4a31-4802-b9b2-827b0ad53628","Type":"ContainerStarted","Data":"19937977aa7e06a20ab6b20e916b149453861b55077eeaaaf854ed643fd7e1b2"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.108155 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.112893 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" event={"ID":"8684cb78-fad5-4998-a52f-ba39be875af1","Type":"ContainerStarted","Data":"8815eb8e4e70058a45cdd5e3558856e2e707a629e236498716dced9334308746"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.114972 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.117645 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555060-zzm7h" event={"ID":"eb2286a3-9a77-43c2-903d-08ab8468c9b6","Type":"ContainerStarted","Data":"25cc31d05fdf798634e45e52b925de1537d2001197beb3e53396d647ecd9519a"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.118254 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" podStartSLOduration=4.019718892 podStartE2EDuration="29.118236638s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:41.518141342 +0000 UTC m=+1255.100177075" lastFinishedPulling="2026-03-12 08:20:06.616659088 +0000 UTC m=+1280.198694821" observedRunningTime="2026-03-12 08:20:08.113075556 +0000 UTC m=+1281.695111289" watchObservedRunningTime="2026-03-12 08:20:08.118236638 +0000 UTC m=+1281.700272371" Mar 12 
08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.130054 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" event={"ID":"b7c605d7-46e5-4daa-beb3-4ef624bc0df9","Type":"ContainerStarted","Data":"baa202825e175982b649eed49d967c4c398176ffad1c37b51f376ee2fcf2ffc5"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.130431 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.139270 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" event={"ID":"275762b5-44af-4358-8562-9574a793b736","Type":"ContainerStarted","Data":"9f7a7374f6aa8f78807ca4271db95e433786bd7eedefd2b965ce5c1d2c241446"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.140219 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.141816 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" event={"ID":"3f72b8db-a17a-4b2c-b638-711766e4f6ed","Type":"ContainerStarted","Data":"cbcc153301d55a591a2e1d1980a53ddcbb1aa47f585bfb91a738a6e9fe7571e1"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.142041 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.145341 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" 
event={"ID":"4abc098b-51aa-4483-93e1-4880178f6167","Type":"ContainerStarted","Data":"7818b7d5311cdedb8d1ace05c16b298e2f645bcd0fe89c7afcfa18e46551380b"} Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.145639 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.146282 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" podStartSLOduration=4.843611636 podStartE2EDuration="29.146263534s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.315307206 +0000 UTC m=+1255.897342939" lastFinishedPulling="2026-03-12 08:20:06.617959104 +0000 UTC m=+1280.199994837" observedRunningTime="2026-03-12 08:20:08.144003952 +0000 UTC m=+1281.726039685" watchObservedRunningTime="2026-03-12 08:20:08.146263534 +0000 UTC m=+1281.728299267" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.202490 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" podStartSLOduration=29.202460619 podStartE2EDuration="29.202460619s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:20:08.194633453 +0000 UTC m=+1281.776669196" watchObservedRunningTime="2026-03-12 08:20:08.202460619 +0000 UTC m=+1281.784496352" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.227431 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" podStartSLOduration=4.705151354 podStartE2EDuration="29.22739992s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 
08:19:41.050589461 +0000 UTC m=+1254.632625184" lastFinishedPulling="2026-03-12 08:20:05.572838007 +0000 UTC m=+1279.154873750" observedRunningTime="2026-03-12 08:20:08.225674652 +0000 UTC m=+1281.807710385" watchObservedRunningTime="2026-03-12 08:20:08.22739992 +0000 UTC m=+1281.809435653" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.260974 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" podStartSLOduration=4.951754829 podStartE2EDuration="29.260957848s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.307300065 +0000 UTC m=+1255.889335798" lastFinishedPulling="2026-03-12 08:20:06.616503074 +0000 UTC m=+1280.198538817" observedRunningTime="2026-03-12 08:20:08.259806376 +0000 UTC m=+1281.841842109" watchObservedRunningTime="2026-03-12 08:20:08.260957848 +0000 UTC m=+1281.842993581" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.289035 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" podStartSLOduration=7.098134266 podStartE2EDuration="29.289011985s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:41.297902606 +0000 UTC m=+1254.879938339" lastFinishedPulling="2026-03-12 08:20:03.488780335 +0000 UTC m=+1277.070816058" observedRunningTime="2026-03-12 08:20:08.283914165 +0000 UTC m=+1281.865949888" watchObservedRunningTime="2026-03-12 08:20:08.289011985 +0000 UTC m=+1281.871047718" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.303913 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" podStartSLOduration=3.911639241 podStartE2EDuration="29.303881987s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.292621488 
+0000 UTC m=+1255.874657211" lastFinishedPulling="2026-03-12 08:20:07.684864224 +0000 UTC m=+1281.266899957" observedRunningTime="2026-03-12 08:20:08.30219924 +0000 UTC m=+1281.884234973" watchObservedRunningTime="2026-03-12 08:20:08.303881987 +0000 UTC m=+1281.885917720" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.336357 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podStartSLOduration=4.560906052 podStartE2EDuration="29.336326355s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.496187673 +0000 UTC m=+1256.078223406" lastFinishedPulling="2026-03-12 08:20:07.271607966 +0000 UTC m=+1280.853643709" observedRunningTime="2026-03-12 08:20:08.329086065 +0000 UTC m=+1281.911121798" watchObservedRunningTime="2026-03-12 08:20:08.336326355 +0000 UTC m=+1281.918362088" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.363921 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" podStartSLOduration=12.543412589999999 podStartE2EDuration="29.363894598s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:40.25138085 +0000 UTC m=+1253.833416583" lastFinishedPulling="2026-03-12 08:19:57.071862858 +0000 UTC m=+1270.653898591" observedRunningTime="2026-03-12 08:20:08.352276596 +0000 UTC m=+1281.934312329" watchObservedRunningTime="2026-03-12 08:20:08.363894598 +0000 UTC m=+1281.945930331" Mar 12 08:20:08 crc kubenswrapper[4809]: I0312 08:20:08.373206 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" podStartSLOduration=3.89318061 podStartE2EDuration="29.373180744s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:41.138083893 +0000 UTC 
m=+1254.720119626" lastFinishedPulling="2026-03-12 08:20:06.618083987 +0000 UTC m=+1280.200119760" observedRunningTime="2026-03-12 08:20:08.372598429 +0000 UTC m=+1281.954634152" watchObservedRunningTime="2026-03-12 08:20:08.373180744 +0000 UTC m=+1281.955216487" Mar 12 08:20:11 crc kubenswrapper[4809]: I0312 08:20:11.847577 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:20:11 crc kubenswrapper[4809]: I0312 08:20:11.855215 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d42ca3a9-74a0-4e76-ac25-730f412c28de-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4\" (UID: \"d42ca3a9-74a0-4e76-ac25-730f412c28de\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.034687 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4j6bg" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.043503 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.228442 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" event={"ID":"da29e412-21cc-4249-9791-55335156ff1b","Type":"ContainerStarted","Data":"530a1dc2cd942839be8fa486229a2ba658205721a59d1dcec0908debc1462661"} Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.229513 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.235992 4809 generic.go:334] "Generic (PLEG): container finished" podID="eb2286a3-9a77-43c2-903d-08ab8468c9b6" containerID="1a51f307df6fe5bf4978ea8e8793cd993e3bfd4fb5fd85053140da22e30402ce" exitCode=0 Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.236063 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555060-zzm7h" event={"ID":"eb2286a3-9a77-43c2-903d-08ab8468c9b6","Type":"ContainerDied","Data":"1a51f307df6fe5bf4978ea8e8793cd993e3bfd4fb5fd85053140da22e30402ce"} Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.254435 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" event={"ID":"990522cb-5ef4-45d5-9eba-debcb4e51bae","Type":"ContainerStarted","Data":"c5a644673dc71faeedd11408a214e768cf63ff7ae31c069886345f8d43c21c16"} Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.254691 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.262829 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" event={"ID":"ead62bdc-2a69-4b3a-a6c5-b60614a34263","Type":"ContainerStarted","Data":"e189187fdc6a3371fa7f6f95b85e1442a48896c3ba21f6ee166390bb410fe8e2"} Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.263319 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.265815 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" event={"ID":"87b1729d-5a9d-4e35-bec1-21d7307020f2","Type":"ContainerStarted","Data":"254bc290e3d8c4143a0b04f619ea0472c346a233dca8db11fe08ebf604e65703"} Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.267641 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.272872 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" podStartSLOduration=2.6882915609999998 podStartE2EDuration="33.27285871s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:41.297889476 +0000 UTC m=+1254.879925209" lastFinishedPulling="2026-03-12 08:20:11.882456625 +0000 UTC m=+1285.464492358" observedRunningTime="2026-03-12 08:20:12.254478361 +0000 UTC m=+1285.836514094" watchObservedRunningTime="2026-03-12 08:20:12.27285871 +0000 UTC m=+1285.854894443" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.289885 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" podStartSLOduration=4.028243718 podStartE2EDuration="33.289867441s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" 
firstStartedPulling="2026-03-12 08:19:42.333333314 +0000 UTC m=+1255.915369047" lastFinishedPulling="2026-03-12 08:20:11.594957037 +0000 UTC m=+1285.176992770" observedRunningTime="2026-03-12 08:20:12.286819757 +0000 UTC m=+1285.868855490" watchObservedRunningTime="2026-03-12 08:20:12.289867441 +0000 UTC m=+1285.871903174" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.309013 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" podStartSLOduration=3.339775284 podStartE2EDuration="33.30899134s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:41.159137406 +0000 UTC m=+1254.741173139" lastFinishedPulling="2026-03-12 08:20:11.128353422 +0000 UTC m=+1284.710389195" observedRunningTime="2026-03-12 08:20:12.30286143 +0000 UTC m=+1285.884897163" watchObservedRunningTime="2026-03-12 08:20:12.30899134 +0000 UTC m=+1285.891027073" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.328095 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podStartSLOduration=3.086716649 podStartE2EDuration="33.328074718s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:40.996743031 +0000 UTC m=+1254.578778754" lastFinishedPulling="2026-03-12 08:20:11.23810109 +0000 UTC m=+1284.820136823" observedRunningTime="2026-03-12 08:20:12.324636453 +0000 UTC m=+1285.906672186" watchObservedRunningTime="2026-03-12 08:20:12.328074718 +0000 UTC m=+1285.910110441" Mar 12 08:20:12 crc kubenswrapper[4809]: I0312 08:20:12.557297 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4"] Mar 12 08:20:13 crc kubenswrapper[4809]: I0312 08:20:13.278800 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" event={"ID":"d42ca3a9-74a0-4e76-ac25-730f412c28de","Type":"ContainerStarted","Data":"8325f1381dbf0e82919eb17791e1769bf26f65f38e9c8f187955dd522eedcb88"} Mar 12 08:20:13 crc kubenswrapper[4809]: I0312 08:20:13.649217 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555060-zzm7h" Mar 12 08:20:13 crc kubenswrapper[4809]: I0312 08:20:13.783586 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd9g5\" (UniqueName: \"kubernetes.io/projected/eb2286a3-9a77-43c2-903d-08ab8468c9b6-kube-api-access-gd9g5\") pod \"eb2286a3-9a77-43c2-903d-08ab8468c9b6\" (UID: \"eb2286a3-9a77-43c2-903d-08ab8468c9b6\") " Mar 12 08:20:13 crc kubenswrapper[4809]: I0312 08:20:13.791262 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2286a3-9a77-43c2-903d-08ab8468c9b6-kube-api-access-gd9g5" (OuterVolumeSpecName: "kube-api-access-gd9g5") pod "eb2286a3-9a77-43c2-903d-08ab8468c9b6" (UID: "eb2286a3-9a77-43c2-903d-08ab8468c9b6"). InnerVolumeSpecName "kube-api-access-gd9g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:20:13 crc kubenswrapper[4809]: I0312 08:20:13.888464 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd9g5\" (UniqueName: \"kubernetes.io/projected/eb2286a3-9a77-43c2-903d-08ab8468c9b6-kube-api-access-gd9g5\") on node \"crc\" DevicePath \"\"" Mar 12 08:20:14 crc kubenswrapper[4809]: I0312 08:20:14.287345 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" event={"ID":"9da05ba1-fc66-48d8-a8ce-c99c04f0e416","Type":"ContainerStarted","Data":"c2c9a22fd513993839a0b699bb6a562390abf6b08e31505b31cced228c605df2"} Mar 12 08:20:14 crc kubenswrapper[4809]: I0312 08:20:14.288724 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:20:14 crc kubenswrapper[4809]: I0312 08:20:14.289800 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555060-zzm7h" event={"ID":"eb2286a3-9a77-43c2-903d-08ab8468c9b6","Type":"ContainerDied","Data":"25cc31d05fdf798634e45e52b925de1537d2001197beb3e53396d647ecd9519a"} Mar 12 08:20:14 crc kubenswrapper[4809]: I0312 08:20:14.289826 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25cc31d05fdf798634e45e52b925de1537d2001197beb3e53396d647ecd9519a" Mar 12 08:20:14 crc kubenswrapper[4809]: I0312 08:20:14.289862 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555060-zzm7h" Mar 12 08:20:14 crc kubenswrapper[4809]: I0312 08:20:14.316241 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" podStartSLOduration=29.301161692 podStartE2EDuration="35.316218646s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:20:07.618994351 +0000 UTC m=+1281.201030084" lastFinishedPulling="2026-03-12 08:20:13.634051295 +0000 UTC m=+1287.216087038" observedRunningTime="2026-03-12 08:20:14.314646142 +0000 UTC m=+1287.896681875" watchObservedRunningTime="2026-03-12 08:20:14.316218646 +0000 UTC m=+1287.898254379" Mar 12 08:20:14 crc kubenswrapper[4809]: I0312 08:20:14.712991 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555054-4zjhg"] Mar 12 08:20:14 crc kubenswrapper[4809]: I0312 08:20:14.725404 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555054-4zjhg"] Mar 12 08:20:15 crc kubenswrapper[4809]: I0312 08:20:15.059162 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:20:15 crc kubenswrapper[4809]: I0312 08:20:15.059792 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:20:15 crc kubenswrapper[4809]: I0312 08:20:15.117707 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fcdb6b44-c8aa-4f25-82eb-f49c0566fc41" path="/var/lib/kubelet/pods/fcdb6b44-c8aa-4f25-82eb-f49c0566fc41/volumes" Mar 12 08:20:16 crc kubenswrapper[4809]: I0312 08:20:16.327835 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" event={"ID":"d42ca3a9-74a0-4e76-ac25-730f412c28de","Type":"ContainerStarted","Data":"6518d2d42a32579074955729cbb0f80cfd14bc5eee8585c672b0cf3b551b8f27"} Mar 12 08:20:16 crc kubenswrapper[4809]: I0312 08:20:16.328226 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:20:16 crc kubenswrapper[4809]: I0312 08:20:16.330199 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" event={"ID":"f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c","Type":"ContainerStarted","Data":"affa355cd7f4868219d096f84c4596d9864ce1c251d1204a62a51d4c56d85a79"} Mar 12 08:20:16 crc kubenswrapper[4809]: I0312 08:20:16.330520 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 08:20:16 crc kubenswrapper[4809]: I0312 08:20:16.361481 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" podStartSLOduration=34.699356142 podStartE2EDuration="37.361453364s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:20:12.587611561 +0000 UTC m=+1286.169647294" lastFinishedPulling="2026-03-12 08:20:15.249708793 +0000 UTC m=+1288.831744516" observedRunningTime="2026-03-12 08:20:16.358705668 +0000 UTC m=+1289.940741451" watchObservedRunningTime="2026-03-12 08:20:16.361453364 +0000 UTC m=+1289.943489127" Mar 12 08:20:16 crc kubenswrapper[4809]: I0312 08:20:16.381666 
4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" podStartSLOduration=4.564435588 podStartE2EDuration="37.381640612s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.434108264 +0000 UTC m=+1256.016143997" lastFinishedPulling="2026-03-12 08:20:15.251313258 +0000 UTC m=+1288.833349021" observedRunningTime="2026-03-12 08:20:16.379706729 +0000 UTC m=+1289.961742502" watchObservedRunningTime="2026-03-12 08:20:16.381640612 +0000 UTC m=+1289.963676375" Mar 12 08:20:16 crc kubenswrapper[4809]: I0312 08:20:16.447824 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" Mar 12 08:20:17 crc kubenswrapper[4809]: I0312 08:20:17.343292 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" event={"ID":"a4ff847c-f029-4537-ab92-0ae803769dfc","Type":"ContainerStarted","Data":"8e0daf4df79d07c6b37c6c6d5413673fcd60a4abe12c3c566981414e0a32994f"} Mar 12 08:20:17 crc kubenswrapper[4809]: I0312 08:20:17.344046 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" Mar 12 08:20:17 crc kubenswrapper[4809]: I0312 08:20:17.378024 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" podStartSLOduration=3.139297405 podStartE2EDuration="38.37799447s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:41.297949528 +0000 UTC m=+1254.879985261" lastFinishedPulling="2026-03-12 08:20:16.536646593 +0000 UTC m=+1290.118682326" observedRunningTime="2026-03-12 08:20:17.375191412 +0000 UTC m=+1290.957227145" watchObservedRunningTime="2026-03-12 
08:20:17.37799447 +0000 UTC m=+1290.960030203" Mar 12 08:20:18 crc kubenswrapper[4809]: I0312 08:20:18.354333 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" event={"ID":"b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e","Type":"ContainerStarted","Data":"58dba761330026c4251abfd8c9b811d0114c311523eadda7d7d62617ea3186ff"} Mar 12 08:20:18 crc kubenswrapper[4809]: I0312 08:20:18.354924 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 08:20:18 crc kubenswrapper[4809]: I0312 08:20:18.373021 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podStartSLOduration=4.054664491 podStartE2EDuration="39.373003379s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.337971124 +0000 UTC m=+1255.920006857" lastFinishedPulling="2026-03-12 08:20:17.656310012 +0000 UTC m=+1291.238345745" observedRunningTime="2026-03-12 08:20:18.368350711 +0000 UTC m=+1291.950386444" watchObservedRunningTime="2026-03-12 08:20:18.373003379 +0000 UTC m=+1291.955039112" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.382334 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" event={"ID":"60e08cbe-2284-4030-8073-892fd74bcdc6","Type":"ContainerStarted","Data":"81d4f7dbb10ddb59f8c2b5dd29add4d6f183e2cdcaa9a2953e4ed418e7be6ad6"} Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.413576 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" podStartSLOduration=4.25657627 podStartE2EDuration="40.41355046s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.469561796 +0000 UTC 
m=+1256.051597529" lastFinishedPulling="2026-03-12 08:20:18.626535986 +0000 UTC m=+1292.208571719" observedRunningTime="2026-03-12 08:20:19.404267353 +0000 UTC m=+1292.986303136" watchObservedRunningTime="2026-03-12 08:20:19.41355046 +0000 UTC m=+1292.995586193" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.429651 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.438881 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.488001 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-vbjh6" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.532984 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.592593 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.723078 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.872335 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" Mar 12 08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.895804 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 
08:20:19 crc kubenswrapper[4809]: I0312 08:20:19.983195 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.036026 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.271267 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-9544k" Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.394674 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" event={"ID":"e349e256-24bd-459e-b5d5-4bf9d85b2a5d","Type":"ContainerStarted","Data":"3bc86a5f138c51336ea9bb36f94b7532d330d87d98a5bc84a18ddedf5d3d032f"} Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.394914 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.414638 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" podStartSLOduration=4.205136703 podStartE2EDuration="41.414612526s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.324692165 +0000 UTC m=+1255.906727898" lastFinishedPulling="2026-03-12 08:20:19.534167988 +0000 UTC m=+1293.116203721" observedRunningTime="2026-03-12 08:20:20.413797654 +0000 UTC m=+1293.995833387" watchObservedRunningTime="2026-03-12 08:20:20.414612526 +0000 UTC m=+1293.996648269" Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.458715 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-574d45c66c-nsxhb" Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.538910 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.643185 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 08:20:20 crc kubenswrapper[4809]: I0312 08:20:20.810617 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 08:20:21 crc kubenswrapper[4809]: E0312 08:20:21.108071 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/openstack-k8s-operators/telemetry-operator:8ccfcdb23140e93b912bb23903a7d6fafb754e30\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" Mar 12 08:20:22 crc kubenswrapper[4809]: I0312 08:20:22.049806 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 08:20:23 crc kubenswrapper[4809]: I0312 08:20:23.447392 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" event={"ID":"59492e92-3148-41d4-86ab-0a69de5a3518","Type":"ContainerStarted","Data":"a66cf108af83c1066f053d0999295c1c9004b1a8c6bff08e386f260aa65b34a8"} Mar 12 08:20:23 crc kubenswrapper[4809]: I0312 08:20:23.472353 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc5f9" podStartSLOduration=4.112883761 
podStartE2EDuration="44.472327789s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.293940554 +0000 UTC m=+1255.875976287" lastFinishedPulling="2026-03-12 08:20:22.653384582 +0000 UTC m=+1296.235420315" observedRunningTime="2026-03-12 08:20:23.468935746 +0000 UTC m=+1297.050971479" watchObservedRunningTime="2026-03-12 08:20:23.472327789 +0000 UTC m=+1297.054363532" Mar 12 08:20:25 crc kubenswrapper[4809]: I0312 08:20:25.297620 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" Mar 12 08:20:29 crc kubenswrapper[4809]: I0312 08:20:29.718824 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" Mar 12 08:20:30 crc kubenswrapper[4809]: I0312 08:20:30.035793 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 08:20:30 crc kubenswrapper[4809]: I0312 08:20:30.187981 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 08:20:30 crc kubenswrapper[4809]: I0312 08:20:30.543204 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 08:20:32 crc kubenswrapper[4809]: I0312 08:20:32.598380 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" event={"ID":"b40480af-2b15-4c8f-9bf2-f63ca0dd6870","Type":"ContainerStarted","Data":"fe0f707e1abcd58e7d3ce4b7d951c97bc5b03d32dbdea8f4be6e16b1c6d8f122"} Mar 12 08:20:32 crc kubenswrapper[4809]: I0312 08:20:32.600677 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 08:20:32 crc kubenswrapper[4809]: I0312 08:20:32.641847 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podStartSLOduration=3.942458761 podStartE2EDuration="53.641803258s" podCreationTimestamp="2026-03-12 08:19:39 +0000 UTC" firstStartedPulling="2026-03-12 08:19:42.499012681 +0000 UTC m=+1256.081048414" lastFinishedPulling="2026-03-12 08:20:32.198357138 +0000 UTC m=+1305.780392911" observedRunningTime="2026-03-12 08:20:32.624241898 +0000 UTC m=+1306.206277631" watchObservedRunningTime="2026-03-12 08:20:32.641803258 +0000 UTC m=+1306.223839031" Mar 12 08:20:40 crc kubenswrapper[4809]: I0312 08:20:40.543865 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 08:20:45 crc kubenswrapper[4809]: I0312 08:20:45.048624 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:20:45 crc kubenswrapper[4809]: I0312 08:20:45.049393 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.570048 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xvf62"] Mar 12 08:20:57 crc kubenswrapper[4809]: E0312 08:20:57.570964 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eb2286a3-9a77-43c2-903d-08ab8468c9b6" containerName="oc" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.570977 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2286a3-9a77-43c2-903d-08ab8468c9b6" containerName="oc" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.571166 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2286a3-9a77-43c2-903d-08ab8468c9b6" containerName="oc" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.572082 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.576244 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-b554j" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.576432 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.576573 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.577926 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.595045 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xvf62"] Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.625150 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h28x\" (UniqueName: \"kubernetes.io/projected/a9d7c154-1946-486a-a724-181924559d35-kube-api-access-7h28x\") pod \"dnsmasq-dns-675f4bcbfc-xvf62\" (UID: \"a9d7c154-1946-486a-a724-181924559d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.625211 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d7c154-1946-486a-a724-181924559d35-config\") pod \"dnsmasq-dns-675f4bcbfc-xvf62\" (UID: \"a9d7c154-1946-486a-a724-181924559d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.625434 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8v8xx"] Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.627015 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.630716 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.645593 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8v8xx"] Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.727205 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-config\") pod \"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.727299 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvftx\" (UniqueName: \"kubernetes.io/projected/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-kube-api-access-mvftx\") pod \"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.727501 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h28x\" (UniqueName: 
\"kubernetes.io/projected/a9d7c154-1946-486a-a724-181924559d35-kube-api-access-7h28x\") pod \"dnsmasq-dns-675f4bcbfc-xvf62\" (UID: \"a9d7c154-1946-486a-a724-181924559d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.727555 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d7c154-1946-486a-a724-181924559d35-config\") pod \"dnsmasq-dns-675f4bcbfc-xvf62\" (UID: \"a9d7c154-1946-486a-a724-181924559d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.727749 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.729042 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d7c154-1946-486a-a724-181924559d35-config\") pod \"dnsmasq-dns-675f4bcbfc-xvf62\" (UID: \"a9d7c154-1946-486a-a724-181924559d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.758179 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h28x\" (UniqueName: \"kubernetes.io/projected/a9d7c154-1946-486a-a724-181924559d35-kube-api-access-7h28x\") pod \"dnsmasq-dns-675f4bcbfc-xvf62\" (UID: \"a9d7c154-1946-486a-a724-181924559d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.828892 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-dns-svc\") pod 
\"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.829395 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-config\") pod \"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.829455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvftx\" (UniqueName: \"kubernetes.io/projected/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-kube-api-access-mvftx\") pod \"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.830068 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.830193 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-config\") pod \"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.867859 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvftx\" (UniqueName: \"kubernetes.io/projected/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-kube-api-access-mvftx\") pod \"dnsmasq-dns-78dd6ddcc-8v8xx\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.898978 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:20:57 crc kubenswrapper[4809]: I0312 08:20:57.945755 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:20:58 crc kubenswrapper[4809]: I0312 08:20:58.420654 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xvf62"] Mar 12 08:20:58 crc kubenswrapper[4809]: I0312 08:20:58.526697 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8v8xx"] Mar 12 08:20:58 crc kubenswrapper[4809]: I0312 08:20:58.942265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" event={"ID":"a9d7c154-1946-486a-a724-181924559d35","Type":"ContainerStarted","Data":"5cfebb24669af9fda681fe4039d7553abdf058af79b4ef69855cd61d3bf77d09"} Mar 12 08:20:58 crc kubenswrapper[4809]: I0312 08:20:58.944217 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" event={"ID":"1c364e2e-35b0-4c80-8592-eb5841e7b6d8","Type":"ContainerStarted","Data":"a59e03906d18510ab1c027e0a21ddd0d38840b7a64a9e14561224041a796c975"} Mar 12 08:20:59 crc kubenswrapper[4809]: I0312 08:20:59.078949 4809 scope.go:117] "RemoveContainer" containerID="d9f1ba49fb9a53764bd9806a21df910c36608b752a45981f23ee548921c3bf9a" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.667687 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xvf62"] Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.709382 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lctvd"] Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.712829 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.730266 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lctvd"] Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.813299 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-config\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.813378 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbspm\" (UniqueName: \"kubernetes.io/projected/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-kube-api-access-pbspm\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.813411 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.915509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbspm\" (UniqueName: \"kubernetes.io/projected/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-kube-api-access-pbspm\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.915586 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.915757 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-config\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.917063 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.917281 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-config\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:00 crc kubenswrapper[4809]: I0312 08:21:00.976328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbspm\" (UniqueName: \"kubernetes.io/projected/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-kube-api-access-pbspm\") pod \"dnsmasq-dns-5ccc8479f9-lctvd\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.047863 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.067836 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8v8xx"] Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.146101 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-nzln7"] Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.149701 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.170649 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-nzln7"] Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.332942 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-config\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.333219 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsjxm\" (UniqueName: \"kubernetes.io/projected/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-kube-api-access-bsjxm\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.333355 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 
08:21:01.435700 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-config\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.435824 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsjxm\" (UniqueName: \"kubernetes.io/projected/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-kube-api-access-bsjxm\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.435905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.436669 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-config\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.436735 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.462192 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsjxm\" 
(UniqueName: \"kubernetes.io/projected/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-kube-api-access-bsjxm\") pod \"dnsmasq-dns-57d769cc4f-nzln7\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.543838 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.818152 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.833981 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.838514 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9gcnx" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.838837 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.838990 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.839172 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.839207 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.839310 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.842526 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lctvd"] Mar 12 08:21:01 crc 
kubenswrapper[4809]: I0312 08:21:01.844082 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.862494 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.947075 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.947173 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.947192 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.947441 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.947494 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.947624 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.947713 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.947764 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8n88\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-kube-api-access-z8n88\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.948384 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc 
kubenswrapper[4809]: I0312 08:21:01.948454 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:01 crc kubenswrapper[4809]: I0312 08:21:01.948489 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051465 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051515 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051566 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc 
kubenswrapper[4809]: I0312 08:21:02.051600 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051622 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8n88\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-kube-api-access-z8n88\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051653 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051671 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051689 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051732 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051750 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.051765 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.052398 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.053687 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.053868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.054753 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.054896 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.059139 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.059855 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.059904 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.060239 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.060306 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bf583053864d0acd6928dddf5a4c14113cd4eb1967e29536659ba8936bc6cdee/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.060525 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.076182 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8n88\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-kube-api-access-z8n88\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.110916 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"rabbitmq-cell1-server-0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.184079 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.267875 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.277304 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.282881 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-46ffq" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.283527 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.286344 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.286576 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.287064 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.287271 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.288549 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.288710 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.304033 4809 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-server-2"] Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.305781 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.326640 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.353316 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.357734 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364546 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364594 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364616 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d043d696-d09f-4c43-8960-0d31789103e8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364640 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364662 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfhxl\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-kube-api-access-jfhxl\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364679 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364697 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364713 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364734 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364760 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364780 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-config-data\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364800 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364815 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d043d696-d09f-4c43-8960-0d31789103e8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364833 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-config-data\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364851 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364866 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364906 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364922 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.364961 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/c45bb76f-92d7-4214-9ce3-c64361a40416-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.365010 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.365038 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c45bb76f-92d7-4214-9ce3-c64361a40416-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.365059 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hvq\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-kube-api-access-j7hvq\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.377994 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.484186 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 
08:21:02.484419 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c45bb76f-92d7-4214-9ce3-c64361a40416-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.484501 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hvq\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-kube-api-access-j7hvq\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.484559 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31094b6a-8ac7-4bbf-883e-aabf280fe22e-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.484731 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.484772 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31094b6a-8ac7-4bbf-883e-aabf280fe22e-pod-info\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.484832 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.484905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.484977 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d043d696-d09f-4c43-8960-0d31789103e8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.485569 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487405 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfhxl\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-kube-api-access-jfhxl\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: 
\"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487526 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487558 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487605 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-config-data\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487659 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487721 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487754 
4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487795 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487842 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-config-data\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487900 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.487953 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d043d696-d09f-4c43-8960-0d31789103e8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-config-data\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488040 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488078 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488140 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488269 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488299 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488443 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c45bb76f-92d7-4214-9ce3-c64361a40416-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488767 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488827 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.488925 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpm68\" (UniqueName: 
\"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-kube-api-access-zpm68\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.493620 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.493917 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.495134 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.496011 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.496642 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " 
pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.499957 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-config-data\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.502261 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-config-data\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.502862 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.503158 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.505792 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d043d696-d09f-4c43-8960-0d31789103e8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.507543 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.510997 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c45bb76f-92d7-4214-9ce3-c64361a40416-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.520497 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hvq\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-kube-api-access-j7hvq\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.521270 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.526365 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.527461 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.527490 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/926d41cabb633f27f3d651d566f648c3452c1aea11ce8a8ddceb0de312727de4/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.527798 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.527859 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffccda2180f9e360d117dc76242223eeb49978e3ef6885ba8cb876e094cb7312/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.530530 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d043d696-d09f-4c43-8960-0d31789103e8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.537475 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-tls\") pod \"rabbitmq-server-2\" 
(UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.538445 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.541167 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfhxl\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-kube-api-access-jfhxl\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.548150 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c45bb76f-92d7-4214-9ce3-c64361a40416-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.591694 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.591793 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpm68\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-kube-api-access-zpm68\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.591829 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.591862 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31094b6a-8ac7-4bbf-883e-aabf280fe22e-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.591902 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31094b6a-8ac7-4bbf-883e-aabf280fe22e-pod-info\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.591924 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.592009 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-config-data\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.592050 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.592072 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.592247 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.592296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.592530 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.594321 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-config-data\") pod \"rabbitmq-server-1\" (UID: 
\"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.594452 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.595228 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.597193 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.604286 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31094b6a-8ac7-4bbf-883e-aabf280fe22e-pod-info\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.604364 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"rabbitmq-server-0\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.605627 4809 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.605654 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/328f91188ed09e3478b0c39e8f690b6e2a7dde835523be1bc416406be231afd1/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.606360 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31094b6a-8ac7-4bbf-883e-aabf280fe22e-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.608239 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.609817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.619403 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpm68\" (UniqueName: 
\"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-kube-api-access-zpm68\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.671396 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"rabbitmq-server-1\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.672465 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.676499 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"rabbitmq-server-2\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.693708 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.707296 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 08:21:02 crc kubenswrapper[4809]: I0312 08:21:02.969446 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-nzln7"] Mar 12 08:21:02 crc kubenswrapper[4809]: W0312 08:21:02.975912 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36b41bd2_cbbb_49b4_b5bb_4cb6da3c748f.slice/crio-a9c084c522944819607b9ede4ffe2dced2276cacf775c2f2f86c203df6a04124 WatchSource:0}: Error finding container a9c084c522944819607b9ede4ffe2dced2276cacf775c2f2f86c203df6a04124: Status 404 returned error can't find the container with id a9c084c522944819607b9ede4ffe2dced2276cacf775c2f2f86c203df6a04124 Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.143981 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.144037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" event={"ID":"03a48b86-8bc7-4d5d-87c1-61f3f57110b7","Type":"ContainerStarted","Data":"1981e7ad1b6fe101f96e3fe691761a482d9dbd0e569f995e4c2840874df0750c"} Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.144065 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" event={"ID":"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f","Type":"ContainerStarted","Data":"a9c084c522944819607b9ede4ffe2dced2276cacf775c2f2f86c203df6a04124"} Mar 12 08:21:03 crc kubenswrapper[4809]: W0312 08:21:03.160739 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddae47d35_2955_4c02_88bb_a0fbe4cd7bf0.slice/crio-f65b0cfe8b97b4f4419a315daa266a8831bd67c140d14d466b059baa3537feaa WatchSource:0}: Error finding container f65b0cfe8b97b4f4419a315daa266a8831bd67c140d14d466b059baa3537feaa: Status 404 returned error can't 
find the container with id f65b0cfe8b97b4f4419a315daa266a8831bd67c140d14d466b059baa3537feaa Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.188998 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.196404 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.208281 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.208489 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.214498 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.215403 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.216994 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9qfr8" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.250345 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.272559 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d541616-6c38-428f-bd28-7dc54dceab8c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.272638 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/3d541616-6c38-428f-bd28-7dc54dceab8c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.272696 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d77c3aaa-4bb4-471c-b5a9-0ceaf16de4cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d77c3aaa-4bb4-471c-b5a9-0ceaf16de4cb\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.272749 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d541616-6c38-428f-bd28-7dc54dceab8c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.272835 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-kolla-config\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.272854 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-config-data-default\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.272878 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.272906 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65v9\" (UniqueName: \"kubernetes.io/projected/3d541616-6c38-428f-bd28-7dc54dceab8c-kube-api-access-v65v9\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.346522 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.379084 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d541616-6c38-428f-bd28-7dc54dceab8c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.379190 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d541616-6c38-428f-bd28-7dc54dceab8c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.379243 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d77c3aaa-4bb4-471c-b5a9-0ceaf16de4cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d77c3aaa-4bb4-471c-b5a9-0ceaf16de4cb\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 
08:21:03.379287 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d541616-6c38-428f-bd28-7dc54dceab8c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.379348 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-kolla-config\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.379374 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-config-data-default\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.379402 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.379431 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v65v9\" (UniqueName: \"kubernetes.io/projected/3d541616-6c38-428f-bd28-7dc54dceab8c-kube-api-access-v65v9\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.380416 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/3d541616-6c38-428f-bd28-7dc54dceab8c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.381099 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-kolla-config\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.382504 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-config-data-default\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.383445 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.383480 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d77c3aaa-4bb4-471c-b5a9-0ceaf16de4cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d77c3aaa-4bb4-471c-b5a9-0ceaf16de4cb\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ab7e57ac34150b147dd3e9ab16a78bd8c24e8924f3c98ee53002146b4b61ebb/globalmount\"" pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.386320 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d541616-6c38-428f-bd28-7dc54dceab8c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.388775 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d541616-6c38-428f-bd28-7dc54dceab8c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.389844 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d541616-6c38-428f-bd28-7dc54dceab8c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.404704 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65v9\" (UniqueName: \"kubernetes.io/projected/3d541616-6c38-428f-bd28-7dc54dceab8c-kube-api-access-v65v9\") pod \"openstack-galera-0\" (UID: 
\"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.460854 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d77c3aaa-4bb4-471c-b5a9-0ceaf16de4cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d77c3aaa-4bb4-471c-b5a9-0ceaf16de4cb\") pod \"openstack-galera-0\" (UID: \"3d541616-6c38-428f-bd28-7dc54dceab8c\") " pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.547926 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.605199 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:21:03 crc kubenswrapper[4809]: W0312 08:21:03.623929 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31094b6a_8ac7_4bbf_883e_aabf280fe22e.slice/crio-0799204193ee6fb0ac4c511848a04cc29a30f9fd31ebfcab14241db41fbb8e6d WatchSource:0}: Error finding container 0799204193ee6fb0ac4c511848a04cc29a30f9fd31ebfcab14241db41fbb8e6d: Status 404 returned error can't find the container with id 0799204193ee6fb0ac4c511848a04cc29a30f9fd31ebfcab14241db41fbb8e6d Mar 12 08:21:03 crc kubenswrapper[4809]: I0312 08:21:03.860076 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.184032 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0","Type":"ContainerStarted","Data":"f65b0cfe8b97b4f4419a315daa266a8831bd67c140d14d466b059baa3537feaa"} Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.188127 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"d043d696-d09f-4c43-8960-0d31789103e8","Type":"ContainerStarted","Data":"09e02e75df58f4998c203bf35d469e713466228ba5da0e5d43774c743fb9cd14"} Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.211206 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"31094b6a-8ac7-4bbf-883e-aabf280fe22e","Type":"ContainerStarted","Data":"0799204193ee6fb0ac4c511848a04cc29a30f9fd31ebfcab14241db41fbb8e6d"} Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.235772 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c45bb76f-92d7-4214-9ce3-c64361a40416","Type":"ContainerStarted","Data":"aa4028f32dc12123d3abd0e57773433c39bbdce9fd9e72bf2e5b3f8e30d640bf"} Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.335529 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.579999 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.599462 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.599597 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.627932 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-khvpq" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.628235 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.629019 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.629559 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.741809 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729d6f4c-335b-486c-bea9-812d4abfdfd9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.743320 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knpdr\" (UniqueName: \"kubernetes.io/projected/729d6f4c-335b-486c-bea9-812d4abfdfd9-kube-api-access-knpdr\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.743382 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 
08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.743430 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-489b2dda-6106-4d17-8f45-9995e85f4055\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-489b2dda-6106-4d17-8f45-9995e85f4055\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.743495 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/729d6f4c-335b-486c-bea9-812d4abfdfd9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.743628 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/729d6f4c-335b-486c-bea9-812d4abfdfd9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.744987 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.745066 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.756062 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.759298 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.769813 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6fs7v" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.770136 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.770276 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.784010 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.852081 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/729d6f4c-335b-486c-bea9-812d4abfdfd9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.852938 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.853494 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.853736 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729d6f4c-335b-486c-bea9-812d4abfdfd9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.853986 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knpdr\" (UniqueName: \"kubernetes.io/projected/729d6f4c-335b-486c-bea9-812d4abfdfd9-kube-api-access-knpdr\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.854437 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.854685 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-489b2dda-6106-4d17-8f45-9995e85f4055\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-489b2dda-6106-4d17-8f45-9995e85f4055\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.854850 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.855369 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/729d6f4c-335b-486c-bea9-812d4abfdfd9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.855509 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.859567 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/729d6f4c-335b-486c-bea9-812d4abfdfd9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.861661 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.861694 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-489b2dda-6106-4d17-8f45-9995e85f4055\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-489b2dda-6106-4d17-8f45-9995e85f4055\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ebada474a0c7e834386fd1612291193b582c812282819a71e5c35aac6bb9c881/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.865679 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729d6f4c-335b-486c-bea9-812d4abfdfd9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.878745 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729d6f4c-335b-486c-bea9-812d4abfdfd9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.891257 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knpdr\" (UniqueName: \"kubernetes.io/projected/729d6f4c-335b-486c-bea9-812d4abfdfd9-kube-api-access-knpdr\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.893672 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/729d6f4c-335b-486c-bea9-812d4abfdfd9-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.924479 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-489b2dda-6106-4d17-8f45-9995e85f4055\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-489b2dda-6106-4d17-8f45-9995e85f4055\") pod \"openstack-cell1-galera-0\" (UID: \"729d6f4c-335b-486c-bea9-812d4abfdfd9\") " pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.963490 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58606282-c6cc-482f-b1be-78717b5d38b2-config-data\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.963641 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/58606282-c6cc-482f-b1be-78717b5d38b2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.964035 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58606282-c6cc-482f-b1be-78717b5d38b2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.964130 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8cst\" (UniqueName: \"kubernetes.io/projected/58606282-c6cc-482f-b1be-78717b5d38b2-kube-api-access-t8cst\") pod \"memcached-0\" (UID: 
\"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.964183 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/58606282-c6cc-482f-b1be-78717b5d38b2-kolla-config\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:04 crc kubenswrapper[4809]: I0312 08:21:04.964192 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.073656 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58606282-c6cc-482f-b1be-78717b5d38b2-config-data\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.074060 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/58606282-c6cc-482f-b1be-78717b5d38b2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.074308 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58606282-c6cc-482f-b1be-78717b5d38b2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.074478 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/58606282-c6cc-482f-b1be-78717b5d38b2-kolla-config\") pod \"memcached-0\" (UID: 
\"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.074630 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8cst\" (UniqueName: \"kubernetes.io/projected/58606282-c6cc-482f-b1be-78717b5d38b2-kube-api-access-t8cst\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.074534 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58606282-c6cc-482f-b1be-78717b5d38b2-config-data\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.075460 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/58606282-c6cc-482f-b1be-78717b5d38b2-kolla-config\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.087041 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58606282-c6cc-482f-b1be-78717b5d38b2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.115766 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8cst\" (UniqueName: \"kubernetes.io/projected/58606282-c6cc-482f-b1be-78717b5d38b2-kube-api-access-t8cst\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.121217 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/58606282-c6cc-482f-b1be-78717b5d38b2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"58606282-c6cc-482f-b1be-78717b5d38b2\") " pod="openstack/memcached-0" Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.344179 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d541616-6c38-428f-bd28-7dc54dceab8c","Type":"ContainerStarted","Data":"2abaffd5ab64f10c95dfaa32655ab169414e108c5fdc5d33db7833e307429e0f"} Mar 12 08:21:05 crc kubenswrapper[4809]: I0312 08:21:05.408737 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 12 08:21:07 crc kubenswrapper[4809]: I0312 08:21:07.309907 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:21:07 crc kubenswrapper[4809]: I0312 08:21:07.313575 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 08:21:07 crc kubenswrapper[4809]: I0312 08:21:07.321413 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jzkp7" Mar 12 08:21:07 crc kubenswrapper[4809]: I0312 08:21:07.344532 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:21:07 crc kubenswrapper[4809]: I0312 08:21:07.364042 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwtz\" (UniqueName: \"kubernetes.io/projected/160163b1-c728-46dd-8caa-df11fdb18266-kube-api-access-qhwtz\") pod \"kube-state-metrics-0\" (UID: \"160163b1-c728-46dd-8caa-df11fdb18266\") " pod="openstack/kube-state-metrics-0" Mar 12 08:21:07 crc kubenswrapper[4809]: I0312 08:21:07.475160 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwtz\" (UniqueName: 
\"kubernetes.io/projected/160163b1-c728-46dd-8caa-df11fdb18266-kube-api-access-qhwtz\") pod \"kube-state-metrics-0\" (UID: \"160163b1-c728-46dd-8caa-df11fdb18266\") " pod="openstack/kube-state-metrics-0" Mar 12 08:21:07 crc kubenswrapper[4809]: I0312 08:21:07.526863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwtz\" (UniqueName: \"kubernetes.io/projected/160163b1-c728-46dd-8caa-df11fdb18266-kube-api-access-qhwtz\") pod \"kube-state-metrics-0\" (UID: \"160163b1-c728-46dd-8caa-df11fdb18266\") " pod="openstack/kube-state-metrics-0" Mar 12 08:21:07 crc kubenswrapper[4809]: I0312 08:21:07.686773 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.321190 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4"] Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.323136 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.337852 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.337937 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-kl4v8" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.341933 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4"] Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.426560 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c7bda2-fd1b-4e75-8991-a7f713283b7d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p9md4\" (UID: \"a3c7bda2-fd1b-4e75-8991-a7f713283b7d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.426996 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlmd\" (UniqueName: \"kubernetes.io/projected/a3c7bda2-fd1b-4e75-8991-a7f713283b7d-kube-api-access-vdlmd\") pod \"observability-ui-dashboards-66cbf594b5-p9md4\" (UID: \"a3c7bda2-fd1b-4e75-8991-a7f713283b7d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.537516 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c7bda2-fd1b-4e75-8991-a7f713283b7d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p9md4\" (UID: \"a3c7bda2-fd1b-4e75-8991-a7f713283b7d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" 
Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.537668 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdlmd\" (UniqueName: \"kubernetes.io/projected/a3c7bda2-fd1b-4e75-8991-a7f713283b7d-kube-api-access-vdlmd\") pod \"observability-ui-dashboards-66cbf594b5-p9md4\" (UID: \"a3c7bda2-fd1b-4e75-8991-a7f713283b7d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.544863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3c7bda2-fd1b-4e75-8991-a7f713283b7d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p9md4\" (UID: \"a3c7bda2-fd1b-4e75-8991-a7f713283b7d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.566280 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdlmd\" (UniqueName: \"kubernetes.io/projected/a3c7bda2-fd1b-4e75-8991-a7f713283b7d-kube-api-access-vdlmd\") pod \"observability-ui-dashboards-66cbf594b5-p9md4\" (UID: \"a3c7bda2-fd1b-4e75-8991-a7f713283b7d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.656687 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.656699 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b99b64b5d-4hf4l"] Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.690359 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b99b64b5d-4hf4l"] Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.690534 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.747675 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-console-config\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.747721 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4210061b-64cb-414a-be09-bf56697ad409-console-oauth-config\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.747761 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbkk5\" (UniqueName: \"kubernetes.io/projected/4210061b-64cb-414a-be09-bf56697ad409-kube-api-access-jbkk5\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.749413 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4210061b-64cb-414a-be09-bf56697ad409-console-serving-cert\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.749503 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-trusted-ca-bundle\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.749537 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-oauth-serving-cert\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.749594 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-service-ca\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.773431 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.776721 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.782609 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.782858 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.782923 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-p5jc5" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.782968 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.783359 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.783585 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.783615 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.783716 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.824719 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.851050 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.851106 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-trusted-ca-bundle\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852206 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852254 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-oauth-serving-cert\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852293 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-trusted-ca-bundle\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852314 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852368 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-service-ca\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852425 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852478 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-console-config\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852502 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4210061b-64cb-414a-be09-bf56697ad409-console-oauth-config\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852525 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbkk5\" (UniqueName: 
\"kubernetes.io/projected/4210061b-64cb-414a-be09-bf56697ad409-kube-api-access-jbkk5\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852548 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gclx8\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-kube-api-access-gclx8\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852617 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852667 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4210061b-64cb-414a-be09-bf56697ad409-console-serving-cert\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852691 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852717 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852738 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852758 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.852853 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-oauth-serving-cert\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.853478 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-service-ca\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " 
pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.854010 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4210061b-64cb-414a-be09-bf56697ad409-console-config\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.866060 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4210061b-64cb-414a-be09-bf56697ad409-console-oauth-config\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.867146 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4210061b-64cb-414a-be09-bf56697ad409-console-serving-cert\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.872156 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbkk5\" (UniqueName: \"kubernetes.io/projected/4210061b-64cb-414a-be09-bf56697ad409-kube-api-access-jbkk5\") pod \"console-5b99b64b5d-4hf4l\" (UID: \"4210061b-64cb-414a-be09-bf56697ad409\") " pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954162 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " 
pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954251 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gclx8\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-kube-api-access-gclx8\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954284 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954314 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954340 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954357 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954375 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954412 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954434 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.954479 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.955637 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.955675 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.956764 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.966394 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.966906 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.966958 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07900c41f367c6133b243a696847e564daee14d73a2768d37c66f6e5f7b4cf48/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.967414 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.979839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gclx8\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-kube-api-access-gclx8\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.983023 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.983904 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:08 crc kubenswrapper[4809]: I0312 08:21:08.995164 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:09 crc kubenswrapper[4809]: I0312 08:21:09.024770 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:09 crc kubenswrapper[4809]: I0312 08:21:09.037296 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:09 crc kubenswrapper[4809]: I0312 08:21:09.109171 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:09 crc kubenswrapper[4809]: I0312 08:21:09.671207 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.321377 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x5bc6"] Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.323207 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.329922 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pt4wh"] Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.332399 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.335946 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.336339 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rqdhg" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.336472 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.363225 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x5bc6"] Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.376530 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pt4wh"] Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425282 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f77780-9a39-4298-8bfe-76a54e1e41d9-combined-ca-bundle\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425341 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8f77780-9a39-4298-8bfe-76a54e1e41d9-ovn-controller-tls-certs\") pod \"ovn-controller-x5bc6\" (UID: 
\"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425399 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-etc-ovs\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425420 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2hps\" (UniqueName: \"kubernetes.io/projected/e8f77780-9a39-4298-8bfe-76a54e1e41d9-kube-api-access-l2hps\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425449 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-log-ovn\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425518 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-run\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6d66\" (UniqueName: \"kubernetes.io/projected/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-kube-api-access-s6d66\") pod \"ovn-controller-ovs-pt4wh\" (UID: 
\"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425576 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-run\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425616 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-scripts\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425650 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8f77780-9a39-4298-8bfe-76a54e1e41d9-scripts\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425687 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-lib\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425709 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-log\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" 
Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.425743 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-run-ovn\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.529863 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-etc-ovs\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.529922 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2hps\" (UniqueName: \"kubernetes.io/projected/e8f77780-9a39-4298-8bfe-76a54e1e41d9-kube-api-access-l2hps\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.529974 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-log-ovn\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530057 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-run\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530094 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s6d66\" (UniqueName: \"kubernetes.io/projected/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-kube-api-access-s6d66\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530140 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-run\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530183 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-scripts\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530213 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8f77780-9a39-4298-8bfe-76a54e1e41d9-scripts\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530253 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-lib\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530280 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-log\") pod \"ovn-controller-ovs-pt4wh\" 
(UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530328 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-run-ovn\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530374 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f77780-9a39-4298-8bfe-76a54e1e41d9-combined-ca-bundle\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.530396 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8f77780-9a39-4298-8bfe-76a54e1e41d9-ovn-controller-tls-certs\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.531354 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-run\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.531626 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-log-ovn\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.531633 
4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-lib\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.531769 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-etc-ovs\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.531891 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8f77780-9a39-4298-8bfe-76a54e1e41d9-var-run-ovn\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.532105 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-run\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.532187 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-var-log\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.533518 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-scripts\") pod \"ovn-controller-ovs-pt4wh\" 
(UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.533761 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8f77780-9a39-4298-8bfe-76a54e1e41d9-scripts\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.542037 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8f77780-9a39-4298-8bfe-76a54e1e41d9-ovn-controller-tls-certs\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.554243 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f77780-9a39-4298-8bfe-76a54e1e41d9-combined-ca-bundle\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.559880 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2hps\" (UniqueName: \"kubernetes.io/projected/e8f77780-9a39-4298-8bfe-76a54e1e41d9-kube-api-access-l2hps\") pod \"ovn-controller-x5bc6\" (UID: \"e8f77780-9a39-4298-8bfe-76a54e1e41d9\") " pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.567977 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6d66\" (UniqueName: \"kubernetes.io/projected/cbad11f2-2bbf-45af-9f9f-72c409c5b0a6-kube-api-access-s6d66\") pod \"ovn-controller-ovs-pt4wh\" (UID: \"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6\") " pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc 
kubenswrapper[4809]: I0312 08:21:10.611029 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.615821 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.621657 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.621894 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zhrtt" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.622017 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.622138 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.622236 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.631370 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.671308 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.689789 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.734386 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.734477 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70486c1b-1f98-4340-a8f9-5a48e381ef7d-config\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.734648 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d11dc79b-d974-4d0a-97b2-5900b75742ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d11dc79b-d974-4d0a-97b2-5900b75742ad\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.734981 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6qxf\" (UniqueName: \"kubernetes.io/projected/70486c1b-1f98-4340-a8f9-5a48e381ef7d-kube-api-access-z6qxf\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.735084 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70486c1b-1f98-4340-a8f9-5a48e381ef7d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: 
\"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.735201 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70486c1b-1f98-4340-a8f9-5a48e381ef7d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.735356 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.735627 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.837939 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70486c1b-1f98-4340-a8f9-5a48e381ef7d-config\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.838445 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d11dc79b-d974-4d0a-97b2-5900b75742ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d11dc79b-d974-4d0a-97b2-5900b75742ad\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 
08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.838491 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6qxf\" (UniqueName: \"kubernetes.io/projected/70486c1b-1f98-4340-a8f9-5a48e381ef7d-kube-api-access-z6qxf\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.838525 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70486c1b-1f98-4340-a8f9-5a48e381ef7d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.838569 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70486c1b-1f98-4340-a8f9-5a48e381ef7d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.838649 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.838707 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.838834 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.839808 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70486c1b-1f98-4340-a8f9-5a48e381ef7d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.840292 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70486c1b-1f98-4340-a8f9-5a48e381ef7d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.841066 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70486c1b-1f98-4340-a8f9-5a48e381ef7d-config\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.843391 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.843841 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.843870 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d11dc79b-d974-4d0a-97b2-5900b75742ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d11dc79b-d974-4d0a-97b2-5900b75742ad\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8eac6ca8c0059e9877752c56e4ee0376f441ba0c45d1e0a470333cb01b0da439/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.844048 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.853452 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70486c1b-1f98-4340-a8f9-5a48e381ef7d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.854686 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6qxf\" (UniqueName: \"kubernetes.io/projected/70486c1b-1f98-4340-a8f9-5a48e381ef7d-kube-api-access-z6qxf\") pod \"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.881246 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d11dc79b-d974-4d0a-97b2-5900b75742ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d11dc79b-d974-4d0a-97b2-5900b75742ad\") pod 
\"ovsdbserver-nb-0\" (UID: \"70486c1b-1f98-4340-a8f9-5a48e381ef7d\") " pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:10 crc kubenswrapper[4809]: I0312 08:21:10.975009 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.554456 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.564825 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.568447 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-52msm" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.568809 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.569010 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.569803 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.607832 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.646681 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7ddd2df-8231-4c1c-99a5-7af5758508ba-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.646785 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4z4p6\" (UniqueName: \"kubernetes.io/projected/e7ddd2df-8231-4c1c-99a5-7af5758508ba-kube-api-access-4z4p6\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.646823 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ddd2df-8231-4c1c-99a5-7af5758508ba-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.646857 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cebae92b-8d67-4a0e-8873-bf634b36a74e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cebae92b-8d67-4a0e-8873-bf634b36a74e\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.646894 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7ddd2df-8231-4c1c-99a5-7af5758508ba-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.647073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.647105 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.647231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.750565 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7ddd2df-8231-4c1c-99a5-7af5758508ba-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.750703 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.750738 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.750822 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.750939 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7ddd2df-8231-4c1c-99a5-7af5758508ba-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.751023 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z4p6\" (UniqueName: \"kubernetes.io/projected/e7ddd2df-8231-4c1c-99a5-7af5758508ba-kube-api-access-4z4p6\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.751061 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ddd2df-8231-4c1c-99a5-7af5758508ba-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.751096 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cebae92b-8d67-4a0e-8873-bf634b36a74e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cebae92b-8d67-4a0e-8873-bf634b36a74e\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.751691 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7ddd2df-8231-4c1c-99a5-7af5758508ba-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.752249 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ddd2df-8231-4c1c-99a5-7af5758508ba-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.753355 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7ddd2df-8231-4c1c-99a5-7af5758508ba-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.757541 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.757592 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cebae92b-8d67-4a0e-8873-bf634b36a74e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cebae92b-8d67-4a0e-8873-bf634b36a74e\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/79ca26819639d2243e80d51775a5caa90710837a68f0165695566dbc350381c9/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.759086 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.759171 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.759593 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ddd2df-8231-4c1c-99a5-7af5758508ba-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.769332 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z4p6\" (UniqueName: \"kubernetes.io/projected/e7ddd2df-8231-4c1c-99a5-7af5758508ba-kube-api-access-4z4p6\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.815716 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cebae92b-8d67-4a0e-8873-bf634b36a74e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cebae92b-8d67-4a0e-8873-bf634b36a74e\") pod \"ovsdbserver-sb-0\" (UID: \"e7ddd2df-8231-4c1c-99a5-7af5758508ba\") " pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:14 crc kubenswrapper[4809]: I0312 08:21:14.921469 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:15 crc kubenswrapper[4809]: I0312 08:21:15.048607 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:21:15 crc kubenswrapper[4809]: I0312 08:21:15.048680 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:21:15 crc kubenswrapper[4809]: I0312 08:21:15.048731 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:21:15 crc kubenswrapper[4809]: I0312 08:21:15.049484 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a328a2cdcc5abe038555d03cc30ceecaf7377dc57d422a2ede895c12e879661b"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:21:15 crc kubenswrapper[4809]: I0312 08:21:15.049540 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://a328a2cdcc5abe038555d03cc30ceecaf7377dc57d422a2ede895c12e879661b" gracePeriod=600 Mar 12 08:21:15 crc kubenswrapper[4809]: I0312 08:21:15.588629 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="a328a2cdcc5abe038555d03cc30ceecaf7377dc57d422a2ede895c12e879661b" exitCode=0 Mar 12 08:21:15 crc kubenswrapper[4809]: I0312 08:21:15.588675 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"a328a2cdcc5abe038555d03cc30ceecaf7377dc57d422a2ede895c12e879661b"} Mar 12 08:21:15 crc kubenswrapper[4809]: I0312 08:21:15.588710 4809 scope.go:117] "RemoveContainer" containerID="943f7c8e36ba68e565889e8e2f0d5e4ed7b8e09774d0614bb78547c2761cd4cc" Mar 12 08:21:23 crc kubenswrapper[4809]: E0312 08:21:23.919922 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 12 08:21:23 crc kubenswrapper[4809]: E0312 08:21:23.921170 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8n88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(dae47d35-2955-4c02-88bb-a0fbe4cd7bf0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:21:23 crc 
kubenswrapper[4809]: E0312 08:21:23.924851 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" Mar 12 08:21:24 crc kubenswrapper[4809]: I0312 08:21:24.416735 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:21:24 crc kubenswrapper[4809]: E0312 08:21:24.707820 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.141756 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.142291 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v65v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(3d541616-6c38-428f-bd28-7dc54dceab8c): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.143551 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.733129 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.927969 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.929062 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bsjxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-nzln7_openstack(36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.930541 4809 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.960067 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.960706 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvftx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8v8xx_openstack(1c364e2e-35b0-4c80-8592-eb5841e7b6d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.962431 4809 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" podUID="1c364e2e-35b0-4c80-8592-eb5841e7b6d8" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.966959 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.967231 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h28x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xvf62_openstack(a9d7c154-1946-486a-a724-181924559d35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.968528 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" podUID="a9d7c154-1946-486a-a724-181924559d35" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.973068 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.973314 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbspm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPol
icy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-lctvd_openstack(03a48b86-8bc7-4d5d-87c1-61f3f57110b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:21:26 crc kubenswrapper[4809]: E0312 08:21:26.974951 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" Mar 12 08:21:26 crc kubenswrapper[4809]: W0312 08:21:26.981068 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160163b1_c728_46dd_8caa_df11fdb18266.slice/crio-95a3cdbe11154e053127c70e279463673de4874127884bb0bca72f15818991db WatchSource:0}: Error finding container 95a3cdbe11154e053127c70e279463673de4874127884bb0bca72f15818991db: Status 404 returned error can't find the container with id 95a3cdbe11154e053127c70e279463673de4874127884bb0bca72f15818991db Mar 12 08:21:27 crc kubenswrapper[4809]: W0312 08:21:27.446701 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3c7bda2_fd1b_4e75_8991_a7f713283b7d.slice/crio-585601b72e14f5d01156e1aac4f22fe887ddb4608e68bb0eb2cfde7afc021936 WatchSource:0}: 
Error finding container 585601b72e14f5d01156e1aac4f22fe887ddb4608e68bb0eb2cfde7afc021936: Status 404 returned error can't find the container with id 585601b72e14f5d01156e1aac4f22fe887ddb4608e68bb0eb2cfde7afc021936 Mar 12 08:21:27 crc kubenswrapper[4809]: I0312 08:21:27.452253 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4"] Mar 12 08:21:27 crc kubenswrapper[4809]: I0312 08:21:27.687140 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x5bc6"] Mar 12 08:21:27 crc kubenswrapper[4809]: I0312 08:21:27.699163 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 08:21:27 crc kubenswrapper[4809]: W0312 08:21:27.737428 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8f77780_9a39_4298_8bfe_76a54e1e41d9.slice/crio-a7858ec6d00329d441f2ceeb33648d236ce97e0aff0eeac7a9125ee351bf0152 WatchSource:0}: Error finding container a7858ec6d00329d441f2ceeb33648d236ce97e0aff0eeac7a9125ee351bf0152: Status 404 returned error can't find the container with id a7858ec6d00329d441f2ceeb33648d236ce97e0aff0eeac7a9125ee351bf0152 Mar 12 08:21:27 crc kubenswrapper[4809]: W0312 08:21:27.745046 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod729d6f4c_335b_486c_bea9_812d4abfdfd9.slice/crio-4a43cbb25dc6cb373bfa8c5f945a0832821add10b36d63a23325b917c38cccfd WatchSource:0}: Error finding container 4a43cbb25dc6cb373bfa8c5f945a0832821add10b36d63a23325b917c38cccfd: Status 404 returned error can't find the container with id 4a43cbb25dc6cb373bfa8c5f945a0832821add10b36d63a23325b917c38cccfd Mar 12 08:21:27 crc kubenswrapper[4809]: I0312 08:21:27.746832 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"160163b1-c728-46dd-8caa-df11fdb18266","Type":"ContainerStarted","Data":"95a3cdbe11154e053127c70e279463673de4874127884bb0bca72f15818991db"} Mar 12 08:21:27 crc kubenswrapper[4809]: I0312 08:21:27.749043 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" event={"ID":"a3c7bda2-fd1b-4e75-8991-a7f713283b7d","Type":"ContainerStarted","Data":"585601b72e14f5d01156e1aac4f22fe887ddb4608e68bb0eb2cfde7afc021936"} Mar 12 08:21:27 crc kubenswrapper[4809]: I0312 08:21:27.754229 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10"} Mar 12 08:21:27 crc kubenswrapper[4809]: E0312 08:21:27.758605 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" Mar 12 08:21:27 crc kubenswrapper[4809]: E0312 08:21:27.772426 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" Mar 12 08:21:27 crc kubenswrapper[4809]: I0312 08:21:27.942623 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.479233 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.569138 4809 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b99b64b5d-4hf4l"] Mar 12 08:21:28 crc kubenswrapper[4809]: W0312 08:21:28.703395 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70486c1b_1f98_4340_a8f9_5a48e381ef7d.slice/crio-8a8dff3f408c89b9ebec03aba7e416755ef302d2605c4b9d678509516740cc4a WatchSource:0}: Error finding container 8a8dff3f408c89b9ebec03aba7e416755ef302d2605c4b9d678509516740cc4a: Status 404 returned error can't find the container with id 8a8dff3f408c89b9ebec03aba7e416755ef302d2605c4b9d678509516740cc4a Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.775178 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"70486c1b-1f98-4340-a8f9-5a48e381ef7d","Type":"ContainerStarted","Data":"8a8dff3f408c89b9ebec03aba7e416755ef302d2605c4b9d678509516740cc4a"} Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.776484 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"729d6f4c-335b-486c-bea9-812d4abfdfd9","Type":"ContainerStarted","Data":"4a43cbb25dc6cb373bfa8c5f945a0832821add10b36d63a23325b917c38cccfd"} Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.777933 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dae78d40-ac67-4ec6-bf05-7f615d22aea9","Type":"ContainerStarted","Data":"6669c6916343aa6405841b56f374201202dba020a52811f32504141688470e9d"} Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.789745 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6" event={"ID":"e8f77780-9a39-4298-8bfe-76a54e1e41d9","Type":"ContainerStarted","Data":"a7858ec6d00329d441f2ceeb33648d236ce97e0aff0eeac7a9125ee351bf0152"} Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.791608 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/memcached-0"] Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.850354 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 08:21:28 crc kubenswrapper[4809]: I0312 08:21:28.991164 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pt4wh"] Mar 12 08:21:29 crc kubenswrapper[4809]: W0312 08:21:29.307287 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbad11f2_2bbf_45af_9f9f_72c409c5b0a6.slice/crio-24323743c0f1bc1f4276af11f8763df8aa8d59d24bf26955bdc8137c96284f03 WatchSource:0}: Error finding container 24323743c0f1bc1f4276af11f8763df8aa8d59d24bf26955bdc8137c96284f03: Status 404 returned error can't find the container with id 24323743c0f1bc1f4276af11f8763df8aa8d59d24bf26955bdc8137c96284f03 Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.376308 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.380153 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.566467 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-config\") pod \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.566748 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d7c154-1946-486a-a724-181924559d35-config\") pod \"a9d7c154-1946-486a-a724-181924559d35\" (UID: \"a9d7c154-1946-486a-a724-181924559d35\") " Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.566812 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-dns-svc\") pod \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.567177 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-config" (OuterVolumeSpecName: "config") pod "1c364e2e-35b0-4c80-8592-eb5841e7b6d8" (UID: "1c364e2e-35b0-4c80-8592-eb5841e7b6d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.567457 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c364e2e-35b0-4c80-8592-eb5841e7b6d8" (UID: "1c364e2e-35b0-4c80-8592-eb5841e7b6d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.567527 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h28x\" (UniqueName: \"kubernetes.io/projected/a9d7c154-1946-486a-a724-181924559d35-kube-api-access-7h28x\") pod \"a9d7c154-1946-486a-a724-181924559d35\" (UID: \"a9d7c154-1946-486a-a724-181924559d35\") " Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.567614 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvftx\" (UniqueName: \"kubernetes.io/projected/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-kube-api-access-mvftx\") pod \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\" (UID: \"1c364e2e-35b0-4c80-8592-eb5841e7b6d8\") " Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.567616 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d7c154-1946-486a-a724-181924559d35-config" (OuterVolumeSpecName: "config") pod "a9d7c154-1946-486a-a724-181924559d35" (UID: "a9d7c154-1946-486a-a724-181924559d35"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.568483 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.568505 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d7c154-1946-486a-a724-181924559d35-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.568518 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.575426 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d7c154-1946-486a-a724-181924559d35-kube-api-access-7h28x" (OuterVolumeSpecName: "kube-api-access-7h28x") pod "a9d7c154-1946-486a-a724-181924559d35" (UID: "a9d7c154-1946-486a-a724-181924559d35"). InnerVolumeSpecName "kube-api-access-7h28x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.590785 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-kube-api-access-mvftx" (OuterVolumeSpecName: "kube-api-access-mvftx") pod "1c364e2e-35b0-4c80-8592-eb5841e7b6d8" (UID: "1c364e2e-35b0-4c80-8592-eb5841e7b6d8"). InnerVolumeSpecName "kube-api-access-mvftx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.670705 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h28x\" (UniqueName: \"kubernetes.io/projected/a9d7c154-1946-486a-a724-181924559d35-kube-api-access-7h28x\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.670752 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvftx\" (UniqueName: \"kubernetes.io/projected/1c364e2e-35b0-4c80-8592-eb5841e7b6d8-kube-api-access-mvftx\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.804956 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.804982 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8v8xx" event={"ID":"1c364e2e-35b0-4c80-8592-eb5841e7b6d8","Type":"ContainerDied","Data":"a59e03906d18510ab1c027e0a21ddd0d38840b7a64a9e14561224041a796c975"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.814369 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"58606282-c6cc-482f-b1be-78717b5d38b2","Type":"ContainerStarted","Data":"c81cf95ea10b495f4a82ca34007f0655020863cdd731b1e79212ed04ff2edfd2"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.815832 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pt4wh" event={"ID":"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6","Type":"ContainerStarted","Data":"24323743c0f1bc1f4276af11f8763df8aa8d59d24bf26955bdc8137c96284f03"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.816700 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" 
event={"ID":"a9d7c154-1946-486a-a724-181924559d35","Type":"ContainerDied","Data":"5cfebb24669af9fda681fe4039d7553abdf058af79b4ef69855cd61d3bf77d09"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.816778 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xvf62" Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.833497 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b99b64b5d-4hf4l" event={"ID":"4210061b-64cb-414a-be09-bf56697ad409","Type":"ContainerStarted","Data":"d7732ab4987e4d48da8cc1e1480190c2beb509c3d30947e9f8aee30c206d9b34"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.841697 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c45bb76f-92d7-4214-9ce3-c64361a40416","Type":"ContainerStarted","Data":"aa4dcb211340daf33201d36c37708b689c8f08f8477256ee26540cef8fdd886e"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.846653 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d043d696-d09f-4c43-8960-0d31789103e8","Type":"ContainerStarted","Data":"853e1d2444390fdd233ca4750bb5446e59760186012bfcafdad952e0f8516afa"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.849945 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7ddd2df-8231-4c1c-99a5-7af5758508ba","Type":"ContainerStarted","Data":"ffe21a8fe689c0d6939395b30233249b3c9def72aacee82a33ec234ac3c4349d"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.853339 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"31094b6a-8ac7-4bbf-883e-aabf280fe22e","Type":"ContainerStarted","Data":"a6ede6ce0480318857787a54e82e85fbafb16bed43eeb8a9ce79c6d82759c251"} Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.886919 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-xvf62"] Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.907074 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xvf62"] Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.969467 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8v8xx"] Mar 12 08:21:29 crc kubenswrapper[4809]: I0312 08:21:29.975585 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8v8xx"] Mar 12 08:21:31 crc kubenswrapper[4809]: I0312 08:21:31.118801 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c364e2e-35b0-4c80-8592-eb5841e7b6d8" path="/var/lib/kubelet/pods/1c364e2e-35b0-4c80-8592-eb5841e7b6d8/volumes" Mar 12 08:21:31 crc kubenswrapper[4809]: I0312 08:21:31.119560 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d7c154-1946-486a-a724-181924559d35" path="/var/lib/kubelet/pods/a9d7c154-1946-486a-a724-181924559d35/volumes" Mar 12 08:21:34 crc kubenswrapper[4809]: I0312 08:21:34.918560 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b99b64b5d-4hf4l" event={"ID":"4210061b-64cb-414a-be09-bf56697ad409","Type":"ContainerStarted","Data":"8774afdeeb1679cfffb4c92ff1abdbd6484d72a346e7a7913274122c1082357f"} Mar 12 08:21:34 crc kubenswrapper[4809]: I0312 08:21:34.952424 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b99b64b5d-4hf4l" podStartSLOduration=26.952395816 podStartE2EDuration="26.952395816s" podCreationTimestamp="2026-03-12 08:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:21:34.941310502 +0000 UTC m=+1368.523346245" watchObservedRunningTime="2026-03-12 08:21:34.952395816 +0000 UTC m=+1368.534431559" Mar 12 08:21:37 crc kubenswrapper[4809]: I0312 08:21:37.972852 
4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pt4wh" event={"ID":"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6","Type":"ContainerStarted","Data":"e53a472ec2fb8da9e2b9c0288381ec1e1162c6d83613666965b56a4fd2efa7ed"} Mar 12 08:21:37 crc kubenswrapper[4809]: I0312 08:21:37.979383 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6" event={"ID":"e8f77780-9a39-4298-8bfe-76a54e1e41d9","Type":"ContainerStarted","Data":"ab3a8707e6840280effad94a730e1d3b790784e021421aba676a8ca756a7bc1d"} Mar 12 08:21:37 crc kubenswrapper[4809]: I0312 08:21:37.980088 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-x5bc6" Mar 12 08:21:37 crc kubenswrapper[4809]: I0312 08:21:37.993868 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"70486c1b-1f98-4340-a8f9-5a48e381ef7d","Type":"ContainerStarted","Data":"e85bbcd9f084860c15f54ce3bd288d0da54a15968d43a9b0fd6ac6aa76aaa202"} Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.013635 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"729d6f4c-335b-486c-bea9-812d4abfdfd9","Type":"ContainerStarted","Data":"b24059a147901d02cf7b527699a75e5e26f7052d2f17aa750b66b885a6916ff8"} Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.020504 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x5bc6" podStartSLOduration=19.053844175 podStartE2EDuration="28.020481331s" podCreationTimestamp="2026-03-12 08:21:10 +0000 UTC" firstStartedPulling="2026-03-12 08:21:27.745000509 +0000 UTC m=+1361.327036252" lastFinishedPulling="2026-03-12 08:21:36.711637675 +0000 UTC m=+1370.293673408" observedRunningTime="2026-03-12 08:21:38.017538479 +0000 UTC m=+1371.599574212" watchObservedRunningTime="2026-03-12 08:21:38.020481331 +0000 UTC m=+1371.602517064" Mar 12 08:21:38 crc kubenswrapper[4809]: 
I0312 08:21:38.022347 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"160163b1-c728-46dd-8caa-df11fdb18266","Type":"ContainerStarted","Data":"936147b419bd7fc41f9179fd87b532efe109407e02f3eb966f79ebaa35a502a6"} Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.022453 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.027332 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"58606282-c6cc-482f-b1be-78717b5d38b2","Type":"ContainerStarted","Data":"9d73043063aec686cb88a72309a9271c9b29b350945c35d116eeffcd84dbb160"} Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.027510 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.028852 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" event={"ID":"a3c7bda2-fd1b-4e75-8991-a7f713283b7d","Type":"ContainerStarted","Data":"0e894bea26c87eb1109a6b596e59542837c98061f83f7c47a5e36742ee7633ab"} Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.048430 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7ddd2df-8231-4c1c-99a5-7af5758508ba","Type":"ContainerStarted","Data":"759453d0552c8b0b2e11e3331d97e0755acf0e5b98941ddfceb76db692c78443"} Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.076509 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=26.630605267 podStartE2EDuration="34.076488204s" podCreationTimestamp="2026-03-12 08:21:04 +0000 UTC" firstStartedPulling="2026-03-12 08:21:29.265786718 +0000 UTC m=+1362.847822451" lastFinishedPulling="2026-03-12 08:21:36.711669625 +0000 UTC m=+1370.293705388" 
observedRunningTime="2026-03-12 08:21:38.070762766 +0000 UTC m=+1371.652798499" watchObservedRunningTime="2026-03-12 08:21:38.076488204 +0000 UTC m=+1371.658523937" Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.088201 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.263257948 podStartE2EDuration="31.088181254s" podCreationTimestamp="2026-03-12 08:21:07 +0000 UTC" firstStartedPulling="2026-03-12 08:21:27.004785101 +0000 UTC m=+1360.586820874" lastFinishedPulling="2026-03-12 08:21:36.829708447 +0000 UTC m=+1370.411744180" observedRunningTime="2026-03-12 08:21:38.086325763 +0000 UTC m=+1371.668361496" watchObservedRunningTime="2026-03-12 08:21:38.088181254 +0000 UTC m=+1371.670216987" Mar 12 08:21:38 crc kubenswrapper[4809]: I0312 08:21:38.107984 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p9md4" podStartSLOduration=20.771022941 podStartE2EDuration="30.107959875s" podCreationTimestamp="2026-03-12 08:21:08 +0000 UTC" firstStartedPulling="2026-03-12 08:21:27.449510688 +0000 UTC m=+1361.031546421" lastFinishedPulling="2026-03-12 08:21:36.786447622 +0000 UTC m=+1370.368483355" observedRunningTime="2026-03-12 08:21:38.105021785 +0000 UTC m=+1371.687057518" watchObservedRunningTime="2026-03-12 08:21:38.107959875 +0000 UTC m=+1371.689995608" Mar 12 08:21:39 crc kubenswrapper[4809]: I0312 08:21:39.128425 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:39 crc kubenswrapper[4809]: I0312 08:21:39.128798 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:39 crc kubenswrapper[4809]: I0312 08:21:39.131559 4809 generic.go:334] "Generic (PLEG): container finished" podID="cbad11f2-2bbf-45af-9f9f-72c409c5b0a6" 
containerID="e53a472ec2fb8da9e2b9c0288381ec1e1162c6d83613666965b56a4fd2efa7ed" exitCode=0 Mar 12 08:21:39 crc kubenswrapper[4809]: I0312 08:21:39.132984 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pt4wh" event={"ID":"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6","Type":"ContainerDied","Data":"e53a472ec2fb8da9e2b9c0288381ec1e1162c6d83613666965b56a4fd2efa7ed"} Mar 12 08:21:39 crc kubenswrapper[4809]: I0312 08:21:39.320577 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.152003 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pt4wh" event={"ID":"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6","Type":"ContainerStarted","Data":"ae091d077ecbc1eb6774c4e29d7223b1856014d63f5ee13ba7d6dada506c07de"} Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.152902 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pt4wh" event={"ID":"cbad11f2-2bbf-45af-9f9f-72c409c5b0a6","Type":"ContainerStarted","Data":"a5b2051836029d2909dd258826d3e0be978db4031c9b80979a3e3fe6f9658242"} Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.153309 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.153329 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.156618 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0","Type":"ContainerStarted","Data":"22b536a9bcc350066bd43312ac398e7aea4d0d0be2649687f90ebdc261a880ce"} Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.188950 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-pt4wh" podStartSLOduration=22.72396035 podStartE2EDuration="30.188925941s" podCreationTimestamp="2026-03-12 08:21:10 +0000 UTC" firstStartedPulling="2026-03-12 08:21:29.367108231 +0000 UTC m=+1362.949143964" lastFinishedPulling="2026-03-12 08:21:36.832073822 +0000 UTC m=+1370.414109555" observedRunningTime="2026-03-12 08:21:40.176701177 +0000 UTC m=+1373.758736920" watchObservedRunningTime="2026-03-12 08:21:40.188925941 +0000 UTC m=+1373.770961674" Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.206023 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d541616-6c38-428f-bd28-7dc54dceab8c","Type":"ContainerStarted","Data":"ef86e92941911427dde484b96afee1899ae2375e6870a0fa4527b851cb3768c5"} Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.214409 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 08:21:40 crc kubenswrapper[4809]: I0312 08:21:40.418420 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57dffbbf6c-jl4cx"] Mar 12 08:21:42 crc kubenswrapper[4809]: I0312 08:21:42.233054 4809 generic.go:334] "Generic (PLEG): container finished" podID="729d6f4c-335b-486c-bea9-812d4abfdfd9" containerID="b24059a147901d02cf7b527699a75e5e26f7052d2f17aa750b66b885a6916ff8" exitCode=0 Mar 12 08:21:42 crc kubenswrapper[4809]: I0312 08:21:42.233156 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"729d6f4c-335b-486c-bea9-812d4abfdfd9","Type":"ContainerDied","Data":"b24059a147901d02cf7b527699a75e5e26f7052d2f17aa750b66b885a6916ff8"} Mar 12 08:21:43 crc kubenswrapper[4809]: I0312 08:21:43.256700 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"3d541616-6c38-428f-bd28-7dc54dceab8c","Type":"ContainerDied","Data":"ef86e92941911427dde484b96afee1899ae2375e6870a0fa4527b851cb3768c5"} Mar 12 08:21:43 crc kubenswrapper[4809]: I0312 08:21:43.256755 4809 generic.go:334] "Generic (PLEG): container finished" podID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerID="ef86e92941911427dde484b96afee1899ae2375e6870a0fa4527b851cb3768c5" exitCode=0 Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.307996 4809 generic.go:334] "Generic (PLEG): container finished" podID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerID="fb1eb5054c8a30723fe7f2989d7ee98802b343aa6feadb1b14983b40f8c7904f" exitCode=0 Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.308093 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" event={"ID":"03a48b86-8bc7-4d5d-87c1-61f3f57110b7","Type":"ContainerDied","Data":"fb1eb5054c8a30723fe7f2989d7ee98802b343aa6feadb1b14983b40f8c7904f"} Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.311942 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"729d6f4c-335b-486c-bea9-812d4abfdfd9","Type":"ContainerStarted","Data":"63bf83ae53184de2fa0781a0983009069fabb2ab94295c9a611958b97ebcb4fc"} Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.316293 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d541616-6c38-428f-bd28-7dc54dceab8c","Type":"ContainerStarted","Data":"15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e"} Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.322407 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7ddd2df-8231-4c1c-99a5-7af5758508ba","Type":"ContainerStarted","Data":"e364e9138fe9230508b41ad80539f00bc53f96fd27dda326b92e99094b7ef308"} Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.342456 4809 generic.go:334] "Generic (PLEG): 
container finished" podID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerID="65c33d200424e87c3e34d9bd54c4f3135e1740d77c288727339e4a4c15f8371e" exitCode=0 Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.342559 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" event={"ID":"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f","Type":"ContainerDied","Data":"65c33d200424e87c3e34d9bd54c4f3135e1740d77c288727339e4a4c15f8371e"} Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.349267 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"70486c1b-1f98-4340-a8f9-5a48e381ef7d","Type":"ContainerStarted","Data":"4548b02822a08b46580364ee2a3b6d51bee0b62ca30a2bc3ecba78d833ce622c"} Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.427657 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.441455 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=39.481896883 podStartE2EDuration="42.441435875s" podCreationTimestamp="2026-03-12 08:21:03 +0000 UTC" firstStartedPulling="2026-03-12 08:21:27.776015348 +0000 UTC m=+1361.358051081" lastFinishedPulling="2026-03-12 08:21:30.73555434 +0000 UTC m=+1364.317590073" observedRunningTime="2026-03-12 08:21:45.383488578 +0000 UTC m=+1378.965524311" watchObservedRunningTime="2026-03-12 08:21:45.441435875 +0000 UTC m=+1379.023471608" Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.442814 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371993.411968 podStartE2EDuration="43.442807872s" podCreationTimestamp="2026-03-12 08:21:02 +0000 UTC" firstStartedPulling="2026-03-12 08:21:04.363770492 +0000 UTC m=+1337.945806215" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 08:21:45.440770616 +0000 UTC m=+1379.022806369" watchObservedRunningTime="2026-03-12 08:21:45.442807872 +0000 UTC m=+1379.024843605" Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.481783 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.548263792 podStartE2EDuration="32.481761148s" podCreationTimestamp="2026-03-12 08:21:13 +0000 UTC" firstStartedPulling="2026-03-12 08:21:29.254286383 +0000 UTC m=+1362.836322116" lastFinishedPulling="2026-03-12 08:21:44.187783739 +0000 UTC m=+1377.769819472" observedRunningTime="2026-03-12 08:21:45.475465696 +0000 UTC m=+1379.057501429" watchObservedRunningTime="2026-03-12 08:21:45.481761148 +0000 UTC m=+1379.063796881" Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.572545 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.2421657 podStartE2EDuration="36.572508643s" podCreationTimestamp="2026-03-12 08:21:09 +0000 UTC" firstStartedPulling="2026-03-12 08:21:28.707040509 +0000 UTC m=+1362.289076242" lastFinishedPulling="2026-03-12 08:21:44.037383442 +0000 UTC m=+1377.619419185" observedRunningTime="2026-03-12 08:21:45.531189072 +0000 UTC m=+1379.113224805" watchObservedRunningTime="2026-03-12 08:21:45.572508643 +0000 UTC m=+1379.154544376" Mar 12 08:21:45 crc kubenswrapper[4809]: I0312 08:21:45.976278 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:46 crc kubenswrapper[4809]: I0312 08:21:46.360434 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" event={"ID":"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f","Type":"ContainerStarted","Data":"facef78f97fa4a2540297c44274d46381defe98dfaee0d776241ab5e1c3e27cf"} Mar 12 08:21:46 crc kubenswrapper[4809]: I0312 08:21:46.361104 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:46 crc kubenswrapper[4809]: I0312 08:21:46.362895 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" event={"ID":"03a48b86-8bc7-4d5d-87c1-61f3f57110b7","Type":"ContainerStarted","Data":"4b1314275d02abb4e6d06ea97160b108460a6e1be34a5cc6f0f152ed49a1616b"} Mar 12 08:21:46 crc kubenswrapper[4809]: I0312 08:21:46.407892 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" podStartSLOduration=4.43817969 podStartE2EDuration="45.407864565s" podCreationTimestamp="2026-03-12 08:21:01 +0000 UTC" firstStartedPulling="2026-03-12 08:21:02.97905076 +0000 UTC m=+1336.561086493" lastFinishedPulling="2026-03-12 08:21:43.948735605 +0000 UTC m=+1377.530771368" observedRunningTime="2026-03-12 08:21:46.39234275 +0000 UTC m=+1379.974378493" watchObservedRunningTime="2026-03-12 08:21:46.407864565 +0000 UTC m=+1379.989900298" Mar 12 08:21:46 crc kubenswrapper[4809]: I0312 08:21:46.425741 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" podStartSLOduration=4.518266544 podStartE2EDuration="46.425719074s" podCreationTimestamp="2026-03-12 08:21:00 +0000 UTC" firstStartedPulling="2026-03-12 08:21:02.237041664 +0000 UTC m=+1335.819077397" lastFinishedPulling="2026-03-12 08:21:44.144494194 +0000 UTC m=+1377.726529927" observedRunningTime="2026-03-12 08:21:46.415691109 +0000 UTC m=+1379.997726852" watchObservedRunningTime="2026-03-12 08:21:46.425719074 +0000 UTC m=+1380.007754807" Mar 12 08:21:46 crc kubenswrapper[4809]: I0312 08:21:46.976601 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.040550 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:47 crc 
kubenswrapper[4809]: I0312 08:21:47.452206 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.750984 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-nzln7"] Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.753602 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.844160 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zn54x"] Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.846023 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.922623 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.925858 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zn54x"] Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.958223 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssh2w\" (UniqueName: \"kubernetes.io/projected/ac713601-7c08-40f3-aaee-dafbe4c413e8-kube-api-access-ssh2w\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.958300 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 
08:21:47 crc kubenswrapper[4809]: I0312 08:21:47.958389 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-config\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.070103 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssh2w\" (UniqueName: \"kubernetes.io/projected/ac713601-7c08-40f3-aaee-dafbe4c413e8-kube-api-access-ssh2w\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.070413 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.072435 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.072645 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-config\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.075098 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-config\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.108932 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.116080 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssh2w\" (UniqueName: \"kubernetes.io/projected/ac713601-7c08-40f3-aaee-dafbe4c413e8-kube-api-access-ssh2w\") pod \"dnsmasq-dns-7cb5889db5-zn54x\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.225321 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.402457 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-chwkh"] Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.418233 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.422190 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.503871 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dae78d40-ac67-4ec6-bf05-7f615d22aea9" containerName="init-config-reloader" containerID="cri-o://31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82" gracePeriod=600 Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.507295 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerName="dnsmasq-dns" containerID="cri-o://facef78f97fa4a2540297c44274d46381defe98dfaee0d776241ab5e1c3e27cf" gracePeriod=10 Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.507489 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dae78d40-ac67-4ec6-bf05-7f615d22aea9","Type":"ContainerStarted","Data":"31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82"} Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.508593 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.514851 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lctvd"] Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.515085 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerName="dnsmasq-dns" containerID="cri-o://4b1314275d02abb4e6d06ea97160b108460a6e1be34a5cc6f0f152ed49a1616b" gracePeriod=10 Mar 12 08:21:48 crc 
kubenswrapper[4809]: I0312 08:21:48.515216 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.518497 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55e426a8-784b-4859-a48a-509e5f045c98-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.518578 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgfh\" (UniqueName: \"kubernetes.io/projected/55e426a8-784b-4859-a48a-509e5f045c98-kube-api-access-lkgfh\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.518632 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/55e426a8-784b-4859-a48a-509e5f045c98-ovs-rundir\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.518659 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e426a8-784b-4859-a48a-509e5f045c98-combined-ca-bundle\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.518698 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/55e426a8-784b-4859-a48a-509e5f045c98-ovn-rundir\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.518743 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e426a8-784b-4859-a48a-509e5f045c98-config\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.560403 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-chwkh"] Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.566058 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-jfdbd"] Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.567740 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.570347 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.622669 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/55e426a8-784b-4859-a48a-509e5f045c98-ovs-rundir\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.622732 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e426a8-784b-4859-a48a-509e5f045c98-combined-ca-bundle\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.622832 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/55e426a8-784b-4859-a48a-509e5f045c98-ovn-rundir\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.622922 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e426a8-784b-4859-a48a-509e5f045c98-config\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.623020 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/55e426a8-784b-4859-a48a-509e5f045c98-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.623143 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkgfh\" (UniqueName: \"kubernetes.io/projected/55e426a8-784b-4859-a48a-509e5f045c98-kube-api-access-lkgfh\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.625384 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/55e426a8-784b-4859-a48a-509e5f045c98-ovs-rundir\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.626059 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55e426a8-784b-4859-a48a-509e5f045c98-config\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.630658 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/55e426a8-784b-4859-a48a-509e5f045c98-ovn-rundir\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.637687 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55e426a8-784b-4859-a48a-509e5f045c98-metrics-certs-tls-certs\") 
pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.645261 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-jfdbd"] Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.665848 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.674868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55e426a8-784b-4859-a48a-509e5f045c98-combined-ca-bundle\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.681839 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkgfh\" (UniqueName: \"kubernetes.io/projected/55e426a8-784b-4859-a48a-509e5f045c98-kube-api-access-lkgfh\") pod \"ovn-controller-metrics-chwkh\" (UID: \"55e426a8-784b-4859-a48a-509e5f045c98\") " pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.725276 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.725398 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s64f9\" (UniqueName: \"kubernetes.io/projected/f2136d46-3abf-4108-9f88-19dfcadc2bf0-kube-api-access-s64f9\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: 
\"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.725431 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.725456 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-config\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.773980 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-chwkh" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.827486 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.827578 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s64f9\" (UniqueName: \"kubernetes.io/projected/f2136d46-3abf-4108-9f88-19dfcadc2bf0-kube-api-access-s64f9\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.827605 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.827634 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-config\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.830195 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.831784 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zn54x"] Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.833017 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-config\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.844274 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.870174 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s64f9\" (UniqueName: \"kubernetes.io/projected/f2136d46-3abf-4108-9f88-19dfcadc2bf0-kube-api-access-s64f9\") pod \"dnsmasq-dns-74f6f696b9-jfdbd\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.870261 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-hd7gx"] Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.872523 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.882739 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.888975 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hd7gx"] Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.924260 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.933191 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.933290 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qdw\" (UniqueName: \"kubernetes.io/projected/085fdfa8-88d8-460c-82cf-87d59d145d7c-kube-api-access-f4qdw\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.933334 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-dns-svc\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.933407 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-config\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:48 crc kubenswrapper[4809]: I0312 08:21:48.933538 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hd7gx\" 
(UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.016093 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.035675 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.038808 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4qdw\" (UniqueName: \"kubernetes.io/projected/085fdfa8-88d8-460c-82cf-87d59d145d7c-kube-api-access-f4qdw\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.039030 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-dns-svc\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.039389 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-config\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.039726 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.041824 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.043385 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.045960 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-dns-svc\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.047096 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.054977 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-config\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.059169 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.059421 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.059720 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.059990 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-tqj2n" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.079036 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4qdw\" (UniqueName: \"kubernetes.io/projected/085fdfa8-88d8-460c-82cf-87d59d145d7c-kube-api-access-f4qdw\") pod \"dnsmasq-dns-698758b865-hd7gx\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.158958 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.223917 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.228553 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.231696 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.239712 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.239811 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pb8vl" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.240052 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.240147 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.243074 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.243182 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b325264-3ac9-446e-b820-c40d942263e6-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.243236 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" 
(UniqueName: \"kubernetes.io/empty-dir/7b325264-3ac9-446e-b820-c40d942263e6-cache\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.243305 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4kjx\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-kube-api-access-z4kjx\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.243332 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5590c361-483d-41eb-8649-94e82d61d326\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5590c361-483d-41eb-8649-94e82d61d326\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.243381 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7b325264-3ac9-446e-b820-c40d942263e6-lock\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.279958 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.301561 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zn54x"] Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.345185 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-scripts\") pod \"ovn-northd-0\" (UID: 
\"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.345571 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.345620 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-config\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.345681 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: E0312 08:21:49.345795 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 08:21:49 crc kubenswrapper[4809]: E0312 08:21:49.345817 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 08:21:49 crc kubenswrapper[4809]: E0312 08:21:49.345873 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift podName:7b325264-3ac9-446e-b820-c40d942263e6 nodeName:}" failed. No retries permitted until 2026-03-12 08:21:49.845856757 +0000 UTC m=+1383.427892490 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift") pod "swift-storage-0" (UID: "7b325264-3ac9-446e-b820-c40d942263e6") : configmap "swift-ring-files" not found Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.345892 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b325264-3ac9-446e-b820-c40d942263e6-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.348194 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.348321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7b325264-3ac9-446e-b820-c40d942263e6-cache\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.348518 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4kjx\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-kube-api-access-z4kjx\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.348565 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5590c361-483d-41eb-8649-94e82d61d326\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5590c361-483d-41eb-8649-94e82d61d326\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.348688 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.348771 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7b325264-3ac9-446e-b820-c40d942263e6-lock\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.348876 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7b325264-3ac9-446e-b820-c40d942263e6-cache\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.348880 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.349079 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdff\" (UniqueName: \"kubernetes.io/projected/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-kube-api-access-bqdff\") pod \"ovn-northd-0\" (UID: 
\"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.349245 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7b325264-3ac9-446e-b820-c40d942263e6-lock\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.354255 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.354313 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5590c361-483d-41eb-8649-94e82d61d326\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5590c361-483d-41eb-8649-94e82d61d326\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6a91188ef2ad69f10f9c90bd831dfd7b48cbb730667be870a5946f37a36423a1/globalmount\"" pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.376545 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b325264-3ac9-446e-b820-c40d942263e6-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.405441 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4kjx\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-kube-api-access-z4kjx\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.451684 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.451772 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.451848 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqdff\" (UniqueName: \"kubernetes.io/projected/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-kube-api-access-bqdff\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.451886 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-scripts\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.451915 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.451964 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-config\") pod \"ovn-northd-0\" (UID: 
\"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.452057 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.455859 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-scripts\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.459779 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.460894 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-config\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.468064 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.478442 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.485140 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.487242 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqdff\" (UniqueName: \"kubernetes.io/projected/4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc-kube-api-access-bqdff\") pod \"ovn-northd-0\" (UID: \"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc\") " pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.488553 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5590c361-483d-41eb-8649-94e82d61d326\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5590c361-483d-41eb-8649-94e82d61d326\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.539801 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.603875 4809 generic.go:334] "Generic (PLEG): container finished" podID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerID="4b1314275d02abb4e6d06ea97160b108460a6e1be34a5cc6f0f152ed49a1616b" exitCode=0 Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.604144 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" event={"ID":"03a48b86-8bc7-4d5d-87c1-61f3f57110b7","Type":"ContainerDied","Data":"4b1314275d02abb4e6d06ea97160b108460a6e1be34a5cc6f0f152ed49a1616b"} Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.607773 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" event={"ID":"ac713601-7c08-40f3-aaee-dafbe4c413e8","Type":"ContainerStarted","Data":"850788a0fbd9a1444a26584c8809f9da08e000635872de4fe67bff769c19393f"} Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.616337 4809 generic.go:334] "Generic (PLEG): container finished" podID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerID="facef78f97fa4a2540297c44274d46381defe98dfaee0d776241ab5e1c3e27cf" exitCode=0 Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.616560 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" event={"ID":"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f","Type":"ContainerDied","Data":"facef78f97fa4a2540297c44274d46381defe98dfaee0d776241ab5e1c3e27cf"} Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.775633 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-jfdbd"] Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.825305 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-chwkh"] Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.901414 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:49 crc kubenswrapper[4809]: E0312 08:21:49.901660 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 08:21:49 crc kubenswrapper[4809]: E0312 08:21:49.901790 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 08:21:49 crc kubenswrapper[4809]: E0312 08:21:49.901870 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift podName:7b325264-3ac9-446e-b820-c40d942263e6 nodeName:}" failed. No retries permitted until 2026-03-12 08:21:50.901833029 +0000 UTC m=+1384.483868762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift") pod "swift-storage-0" (UID: "7b325264-3ac9-446e-b820-c40d942263e6") : configmap "swift-ring-files" not found Mar 12 08:21:49 crc kubenswrapper[4809]: I0312 08:21:49.962840 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hd7gx"] Mar 12 08:21:49 crc kubenswrapper[4809]: W0312 08:21:49.990049 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod085fdfa8_88d8_460c_82cf_87d59d145d7c.slice/crio-3b1cb9d7be66c6b8c750119aae9d6caaff862813e6431b51f08b7f7bb54fda04 WatchSource:0}: Error finding container 3b1cb9d7be66c6b8c750119aae9d6caaff862813e6431b51f08b7f7bb54fda04: Status 404 returned error can't find the container with id 3b1cb9d7be66c6b8c750119aae9d6caaff862813e6431b51f08b7f7bb54fda04 Mar 12 08:21:50 crc kubenswrapper[4809]: I0312 08:21:50.264182 4809 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 08:21:50 crc kubenswrapper[4809]: I0312 08:21:50.629804 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-chwkh" event={"ID":"55e426a8-784b-4859-a48a-509e5f045c98","Type":"ContainerStarted","Data":"3f2ac14e9dbaa5302eddc1e9448e1927c63af34f2a757b7a63abe9f8d3b63b97"} Mar 12 08:21:50 crc kubenswrapper[4809]: I0312 08:21:50.635910 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hd7gx" event={"ID":"085fdfa8-88d8-460c-82cf-87d59d145d7c","Type":"ContainerStarted","Data":"3b1cb9d7be66c6b8c750119aae9d6caaff862813e6431b51f08b7f7bb54fda04"} Mar 12 08:21:50 crc kubenswrapper[4809]: I0312 08:21:50.637719 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" event={"ID":"f2136d46-3abf-4108-9f88-19dfcadc2bf0","Type":"ContainerStarted","Data":"288d508537e73fc44ee90c45baa7372ddd47dd4e147855384a147141e17c9029"} Mar 12 08:21:50 crc kubenswrapper[4809]: I0312 08:21:50.642222 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc","Type":"ContainerStarted","Data":"3c387e82a5fef23144bbbe18318ede84ab349fa16c2ce7e0c10603f408085cde"} Mar 12 08:21:50 crc kubenswrapper[4809]: I0312 08:21:50.927838 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:50 crc kubenswrapper[4809]: E0312 08:21:50.928160 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 08:21:50 crc kubenswrapper[4809]: E0312 08:21:50.928210 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 08:21:50 crc kubenswrapper[4809]: E0312 08:21:50.928322 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift podName:7b325264-3ac9-446e-b820-c40d942263e6 nodeName:}" failed. No retries permitted until 2026-03-12 08:21:52.928293703 +0000 UTC m=+1386.510329436 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift") pod "swift-storage-0" (UID: "7b325264-3ac9-446e-b820-c40d942263e6") : configmap "swift-ring-files" not found Mar 12 08:21:51 crc kubenswrapper[4809]: I0312 08:21:51.053272 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Mar 12 08:21:51 crc kubenswrapper[4809]: I0312 08:21:51.549900 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.891587 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-w9c45"] Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.895047 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.907560 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.907820 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.907947 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.922841 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w9c45"] Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.982777 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-combined-ca-bundle\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.982865 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-dispersionconf\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.982915 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-ring-data-devices\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:52 crc kubenswrapper[4809]: 
I0312 08:21:52.982960 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crw7n\" (UniqueName: \"kubernetes.io/projected/7c349c64-fcf0-48cf-91c5-fac0131bacc6-kube-api-access-crw7n\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.982993 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.983039 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-scripts\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.983056 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c349c64-fcf0-48cf-91c5-fac0131bacc6-etc-swift\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:52 crc kubenswrapper[4809]: I0312 08:21:52.983101 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-swiftconf\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:52 crc kubenswrapper[4809]: E0312 08:21:52.983392 4809 projected.go:288] Couldn't 
get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 08:21:52 crc kubenswrapper[4809]: E0312 08:21:52.983412 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 08:21:52 crc kubenswrapper[4809]: E0312 08:21:52.983458 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift podName:7b325264-3ac9-446e-b820-c40d942263e6 nodeName:}" failed. No retries permitted until 2026-03-12 08:21:56.983440233 +0000 UTC m=+1390.565475966 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift") pod "swift-storage-0" (UID: "7b325264-3ac9-446e-b820-c40d942263e6") : configmap "swift-ring-files" not found Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.086876 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-dispersionconf\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.086959 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-ring-data-devices\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.087044 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crw7n\" (UniqueName: \"kubernetes.io/projected/7c349c64-fcf0-48cf-91c5-fac0131bacc6-kube-api-access-crw7n\") pod \"swift-ring-rebalance-w9c45\" 
(UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.087143 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-scripts\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.087168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c349c64-fcf0-48cf-91c5-fac0131bacc6-etc-swift\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.087237 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-swiftconf\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.087394 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-combined-ca-bundle\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.094646 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c349c64-fcf0-48cf-91c5-fac0131bacc6-etc-swift\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc 
kubenswrapper[4809]: I0312 08:21:53.095166 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-ring-data-devices\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.095593 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-scripts\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.104651 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-dispersionconf\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.105170 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-swiftconf\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.105291 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-combined-ca-bundle\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.111198 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-crw7n\" (UniqueName: \"kubernetes.io/projected/7c349c64-fcf0-48cf-91c5-fac0131bacc6-kube-api-access-crw7n\") pod \"swift-ring-rebalance-w9c45\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.216985 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.292069 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-dns-svc\") pod \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.293432 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-config\") pod \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.345251 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.350339 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" (UID: "36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.352354 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-config" (OuterVolumeSpecName: "config") pod "36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" (UID: "36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.397176 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsjxm\" (UniqueName: \"kubernetes.io/projected/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-kube-api-access-bsjxm\") pod \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\" (UID: \"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f\") " Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.398199 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.398217 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.402360 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-kube-api-access-bsjxm" (OuterVolumeSpecName: "kube-api-access-bsjxm") pod "36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" (UID: "36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f"). InnerVolumeSpecName "kube-api-access-bsjxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.494602 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.508836 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbspm\" (UniqueName: \"kubernetes.io/projected/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-kube-api-access-pbspm\") pod \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.508909 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-dns-svc\") pod \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.508940 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-config\") pod \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\" (UID: \"03a48b86-8bc7-4d5d-87c1-61f3f57110b7\") " Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.509385 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsjxm\" (UniqueName: \"kubernetes.io/projected/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f-kube-api-access-bsjxm\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.523630 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-kube-api-access-pbspm" (OuterVolumeSpecName: "kube-api-access-pbspm") pod "03a48b86-8bc7-4d5d-87c1-61f3f57110b7" (UID: "03a48b86-8bc7-4d5d-87c1-61f3f57110b7"). InnerVolumeSpecName "kube-api-access-pbspm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.548691 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.550199 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.613067 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbspm\" (UniqueName: \"kubernetes.io/projected/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-kube-api-access-pbspm\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.630769 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03a48b86-8bc7-4d5d-87c1-61f3f57110b7" (UID: "03a48b86-8bc7-4d5d-87c1-61f3f57110b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.635330 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-config" (OuterVolumeSpecName: "config") pod "03a48b86-8bc7-4d5d-87c1-61f3f57110b7" (UID: "03a48b86-8bc7-4d5d-87c1-61f3f57110b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.677152 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" event={"ID":"03a48b86-8bc7-4d5d-87c1-61f3f57110b7","Type":"ContainerDied","Data":"1981e7ad1b6fe101f96e3fe691761a482d9dbd0e569f995e4c2840874df0750c"} Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.677207 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lctvd" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.677217 4809 scope.go:117] "RemoveContainer" containerID="4b1314275d02abb4e6d06ea97160b108460a6e1be34a5cc6f0f152ed49a1616b" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.682319 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-chwkh" event={"ID":"55e426a8-784b-4859-a48a-509e5f045c98","Type":"ContainerStarted","Data":"b334fe1c9390c6a9e42fba9c578ffc1f35822fb5859183d4dbfefe3f4b446782"} Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.685210 4809 generic.go:334] "Generic (PLEG): container finished" podID="ac713601-7c08-40f3-aaee-dafbe4c413e8" containerID="99480db2cfa79dbba157aa84cc1fbc136c734178c3254c7b761e4ef81c058d05" exitCode=0 Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.685280 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" event={"ID":"ac713601-7c08-40f3-aaee-dafbe4c413e8","Type":"ContainerDied","Data":"99480db2cfa79dbba157aa84cc1fbc136c734178c3254c7b761e4ef81c058d05"} Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.686675 4809 generic.go:334] "Generic (PLEG): container finished" podID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerID="2d85fec4720bcc579fded2d352fa130084e253678bb26851f353a19f38483718" exitCode=0 Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.686762 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hd7gx" event={"ID":"085fdfa8-88d8-460c-82cf-87d59d145d7c","Type":"ContainerDied","Data":"2d85fec4720bcc579fded2d352fa130084e253678bb26851f353a19f38483718"} Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.693565 4809 generic.go:334] "Generic (PLEG): container finished" podID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" containerID="b118e6526ec5c87f478c419791ca43146967f29cfb365d631b5202f3e9dd942f" exitCode=0 Mar 12 08:21:53 crc kubenswrapper[4809]: 
I0312 08:21:53.693614 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" event={"ID":"f2136d46-3abf-4108-9f88-19dfcadc2bf0","Type":"ContainerDied","Data":"b118e6526ec5c87f478c419791ca43146967f29cfb365d631b5202f3e9dd942f"} Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.715807 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-chwkh" podStartSLOduration=5.714183961 podStartE2EDuration="5.714183961s" podCreationTimestamp="2026-03-12 08:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:21:53.700312231 +0000 UTC m=+1387.282347964" watchObservedRunningTime="2026-03-12 08:21:53.714183961 +0000 UTC m=+1387.296219694" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.718002 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.718047 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03a48b86-8bc7-4d5d-87c1-61f3f57110b7-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.723261 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.723907 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-nzln7" event={"ID":"36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f","Type":"ContainerDied","Data":"a9c084c522944819607b9ede4ffe2dced2276cacf775c2f2f86c203df6a04124"} Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.725530 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.848656 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lctvd"] Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.867253 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lctvd"] Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.892308 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-nzln7"] Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.946689 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-nzln7"] Mar 12 08:21:53 crc kubenswrapper[4809]: I0312 08:21:53.957341 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-w9c45"] Mar 12 08:21:54 crc kubenswrapper[4809]: W0312 08:21:54.212322 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c349c64_fcf0_48cf_91c5_fac0131bacc6.slice/crio-a8ca8d2e85e526ced50a464b68bea5f4b0a3febc7f74127a57b6800f41da183a WatchSource:0}: Error finding container a8ca8d2e85e526ced50a464b68bea5f4b0a3febc7f74127a57b6800f41da183a: Status 404 returned error can't find the container with id a8ca8d2e85e526ced50a464b68bea5f4b0a3febc7f74127a57b6800f41da183a Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.224353 4809 scope.go:117] 
"RemoveContainer" containerID="fb1eb5054c8a30723fe7f2989d7ee98802b343aa6feadb1b14983b40f8c7904f" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.439800 4809 scope.go:117] "RemoveContainer" containerID="facef78f97fa4a2540297c44274d46381defe98dfaee0d776241ab5e1c3e27cf" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.450661 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.523176 4809 scope.go:117] "RemoveContainer" containerID="65c33d200424e87c3e34d9bd54c4f3135e1740d77c288727339e4a4c15f8371e" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.538705 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-config\") pod \"ac713601-7c08-40f3-aaee-dafbe4c413e8\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.538924 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-dns-svc\") pod \"ac713601-7c08-40f3-aaee-dafbe4c413e8\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.538983 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssh2w\" (UniqueName: \"kubernetes.io/projected/ac713601-7c08-40f3-aaee-dafbe4c413e8-kube-api-access-ssh2w\") pod \"ac713601-7c08-40f3-aaee-dafbe4c413e8\" (UID: \"ac713601-7c08-40f3-aaee-dafbe4c413e8\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.577275 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac713601-7c08-40f3-aaee-dafbe4c413e8-kube-api-access-ssh2w" (OuterVolumeSpecName: "kube-api-access-ssh2w") pod 
"ac713601-7c08-40f3-aaee-dafbe4c413e8" (UID: "ac713601-7c08-40f3-aaee-dafbe4c413e8"). InnerVolumeSpecName "kube-api-access-ssh2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.626632 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-config" (OuterVolumeSpecName: "config") pod "ac713601-7c08-40f3-aaee-dafbe4c413e8" (UID: "ac713601-7c08-40f3-aaee-dafbe4c413e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.632706 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac713601-7c08-40f3-aaee-dafbe4c413e8" (UID: "ac713601-7c08-40f3-aaee-dafbe4c413e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.637840 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.646538 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssh2w\" (UniqueName: \"kubernetes.io/projected/ac713601-7c08-40f3-aaee-dafbe4c413e8-kube-api-access-ssh2w\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.646572 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.646583 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac713601-7c08-40f3-aaee-dafbe4c413e8-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.733438 4809 generic.go:334] "Generic (PLEG): container finished" podID="dae78d40-ac67-4ec6-bf05-7f615d22aea9" containerID="31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82" exitCode=0 Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.733484 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dae78d40-ac67-4ec6-bf05-7f615d22aea9","Type":"ContainerDied","Data":"31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82"} Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.733524 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.733534 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dae78d40-ac67-4ec6-bf05-7f615d22aea9","Type":"ContainerDied","Data":"6669c6916343aa6405841b56f374201202dba020a52811f32504141688470e9d"} Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.733553 4809 scope.go:117] "RemoveContainer" containerID="31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.736142 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hd7gx" event={"ID":"085fdfa8-88d8-460c-82cf-87d59d145d7c","Type":"ContainerStarted","Data":"6701a3188bee1691395f8296e3fe5c114f89c6c2a68e1136a6a5ed9ec9661fc8"} Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.736202 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.740678 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w9c45" event={"ID":"7c349c64-fcf0-48cf-91c5-fac0131bacc6","Type":"ContainerStarted","Data":"a8ca8d2e85e526ced50a464b68bea5f4b0a3febc7f74127a57b6800f41da183a"} Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.743062 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" event={"ID":"f2136d46-3abf-4108-9f88-19dfcadc2bf0","Type":"ContainerStarted","Data":"666574e56d1a4791da92d7fb32d184f5d2a8b1928beb178e1be4d7bb4c92aa95"} Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.743278 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.746191 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc","Type":"ContainerStarted","Data":"6ddebca748dcabb4c220ac8431ed2bb05337702abda01dac027073448cbc699e"} Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.747564 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-tls-assets\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748206 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-1\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748261 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config-out\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748286 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gclx8\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-kube-api-access-gclx8\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748321 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-0\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: 
\"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748448 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748534 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-web-config\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748624 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748665 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-2\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.748688 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-thanos-prometheus-http-client-file\") pod \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\" (UID: \"dae78d40-ac67-4ec6-bf05-7f615d22aea9\") " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.749364 4809 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.749378 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.749391 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.752775 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" event={"ID":"ac713601-7c08-40f3-aaee-dafbe4c413e8","Type":"ContainerDied","Data":"850788a0fbd9a1444a26584c8809f9da08e000635872de4fe67bff769c19393f"} Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.752896 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-zn54x" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.759357 4809 scope.go:117] "RemoveContainer" containerID="31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82" Mar 12 08:21:54 crc kubenswrapper[4809]: E0312 08:21:54.760748 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82\": container with ID starting with 31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82 not found: ID does not exist" containerID="31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.760792 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82"} err="failed to get container status \"31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82\": rpc error: code = NotFound desc = could not find container \"31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82\": container with ID starting with 31bea56799402d191891ba87d388362351ba8077b34c57b4d13ef5a639a7bc82 not found: ID does not exist" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.760823 4809 scope.go:117] "RemoveContainer" containerID="99480db2cfa79dbba157aa84cc1fbc136c734178c3254c7b761e4ef81c058d05" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.761338 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-kube-api-access-gclx8" (OuterVolumeSpecName: "kube-api-access-gclx8") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "kube-api-access-gclx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.762096 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-hd7gx" podStartSLOduration=6.762079193 podStartE2EDuration="6.762079193s" podCreationTimestamp="2026-03-12 08:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:21:54.759537703 +0000 UTC m=+1388.341573436" watchObservedRunningTime="2026-03-12 08:21:54.762079193 +0000 UTC m=+1388.344114926" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.762981 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config-out" (OuterVolumeSpecName: "config-out") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.766269 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.767746 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.767830 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config" (OuterVolumeSpecName: "config") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.771865 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-web-config" (OuterVolumeSpecName: "web-config") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.788685 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" podStartSLOduration=6.788662261 podStartE2EDuration="6.788662261s" podCreationTimestamp="2026-03-12 08:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:21:54.779752996 +0000 UTC m=+1388.361788719" watchObservedRunningTime="2026-03-12 08:21:54.788662261 +0000 UTC m=+1388.370697994" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.790366 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "dae78d40-ac67-4ec6-bf05-7f615d22aea9" (UID: "dae78d40-ac67-4ec6-bf05-7f615d22aea9"). InnerVolumeSpecName "pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.854931 4809 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-web-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.854961 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.854972 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.854985 4809 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dae78d40-ac67-4ec6-bf05-7f615d22aea9-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.854994 4809 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.855005 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.855015 4809 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dae78d40-ac67-4ec6-bf05-7f615d22aea9-config-out\") on node \"crc\" 
DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.855025 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gclx8\" (UniqueName: \"kubernetes.io/projected/dae78d40-ac67-4ec6-bf05-7f615d22aea9-kube-api-access-gclx8\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.855034 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dae78d40-ac67-4ec6-bf05-7f615d22aea9-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.855061 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") on node \"crc\" " Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.874184 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zn54x"] Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.887236 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-zn54x"] Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.965568 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:54 crc kubenswrapper[4809]: I0312 08:21:54.966768 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.004373 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.004561 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9") on node "crc"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.059905 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") on node \"crc\" DevicePath \"\""
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.136741 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" path="/var/lib/kubelet/pods/03a48b86-8bc7-4d5d-87c1-61f3f57110b7/volumes"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.137856 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" path="/var/lib/kubelet/pods/36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f/volumes"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.139777 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac713601-7c08-40f3-aaee-dafbe4c413e8" path="/var/lib/kubelet/pods/ac713601-7c08-40f3-aaee-dafbe4c413e8/volumes"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.150851 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.189631 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.209619 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.248317 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 12 08:21:55 crc kubenswrapper[4809]: E0312 08:21:55.248794 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerName="init"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.248813 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerName="init"
Mar 12 08:21:55 crc kubenswrapper[4809]: E0312 08:21:55.248822 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerName="dnsmasq-dns"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.248828 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerName="dnsmasq-dns"
Mar 12 08:21:55 crc kubenswrapper[4809]: E0312 08:21:55.248842 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerName="init"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.248852 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerName="init"
Mar 12 08:21:55 crc kubenswrapper[4809]: E0312 08:21:55.248866 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerName="dnsmasq-dns"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.248872 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerName="dnsmasq-dns"
Mar 12 08:21:55 crc kubenswrapper[4809]: E0312 08:21:55.248880 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac713601-7c08-40f3-aaee-dafbe4c413e8" containerName="init"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.248886 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac713601-7c08-40f3-aaee-dafbe4c413e8" containerName="init"
Mar 12 08:21:55 crc kubenswrapper[4809]: E0312 08:21:55.248904 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae78d40-ac67-4ec6-bf05-7f615d22aea9" containerName="init-config-reloader"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.248910 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae78d40-ac67-4ec6-bf05-7f615d22aea9" containerName="init-config-reloader"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.249144 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a48b86-8bc7-4d5d-87c1-61f3f57110b7" containerName="dnsmasq-dns"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.249159 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b41bd2-cbbb-49b4-b5bb-4cb6da3c748f" containerName="dnsmasq-dns"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.249174 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac713601-7c08-40f3-aaee-dafbe4c413e8" containerName="init"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.249184 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae78d40-ac67-4ec6-bf05-7f615d22aea9" containerName="init-config-reloader"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.251087 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.256545 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-p5jc5"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.264635 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.266842 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.267054 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.267091 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.267091 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.268347 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.270971 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.279977 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.325211 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.375173 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.375372 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpplf\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-kube-api-access-dpplf\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.375488 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fefb3329-70bc-45d1-ac98-44f1836b3470-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.375534 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.375766 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.375875 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.375925 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.376092 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-config\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.376228 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.376273 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.477853 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.477983 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpplf\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-kube-api-access-dpplf\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.478031 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fefb3329-70bc-45d1-ac98-44f1836b3470-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.478064 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.478184 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.478245 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.478279 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.478312 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-config\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.478340 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.478373 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.479335 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.479388 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.479975 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.483141 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.483428 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.483455 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07900c41f367c6133b243a696847e564daee14d73a2768d37c66f6e5f7b4cf48/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.483769 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.485344 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.488069 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fefb3329-70bc-45d1-ac98-44f1836b3470-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.489748 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-config\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.500630 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpplf\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-kube-api-access-dpplf\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.521448 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.582201 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.820026 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b202-account-create-update-678rc"]
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.822660 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b202-account-create-update-678rc"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.825434 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.826286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc","Type":"ContainerStarted","Data":"4ae54c806f6fb5fe8982c136082e01668c4c6b583634f3ea07297bbd55856b74"}
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.826458 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.832666 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5m8tq"]
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.835938 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5m8tq"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.846013 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b202-account-create-update-678rc"]
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.857583 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5m8tq"]
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.858883 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.867290464 podStartE2EDuration="6.858856412s" podCreationTimestamp="2026-03-12 08:21:49 +0000 UTC" firstStartedPulling="2026-03-12 08:21:50.28861925 +0000 UTC m=+1383.870654983" lastFinishedPulling="2026-03-12 08:21:54.280185188 +0000 UTC m=+1387.862220931" observedRunningTime="2026-03-12 08:21:55.847270355 +0000 UTC m=+1389.429306088" watchObservedRunningTime="2026-03-12 08:21:55.858856412 +0000 UTC m=+1389.440892135"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.895656 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pbm\" (UniqueName: \"kubernetes.io/projected/a9faf40d-5722-4d62-a8b6-017e0ab167d2-kube-api-access-w9pbm\") pod \"glance-db-create-5m8tq\" (UID: \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\") " pod="openstack/glance-db-create-5m8tq"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.896127 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54d1448b-4fcf-49ce-aab4-884a71885bb6-operator-scripts\") pod \"glance-b202-account-create-update-678rc\" (UID: \"54d1448b-4fcf-49ce-aab4-884a71885bb6\") " pod="openstack/glance-b202-account-create-update-678rc"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.897415 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9faf40d-5722-4d62-a8b6-017e0ab167d2-operator-scripts\") pod \"glance-db-create-5m8tq\" (UID: \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\") " pod="openstack/glance-db-create-5m8tq"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.897620 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jrmx\" (UniqueName: \"kubernetes.io/projected/54d1448b-4fcf-49ce-aab4-884a71885bb6-kube-api-access-7jrmx\") pod \"glance-b202-account-create-update-678rc\" (UID: \"54d1448b-4fcf-49ce-aab4-884a71885bb6\") " pod="openstack/glance-b202-account-create-update-678rc"
Mar 12 08:21:55 crc kubenswrapper[4809]: I0312 08:21:55.981353 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.000832 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pbm\" (UniqueName: \"kubernetes.io/projected/a9faf40d-5722-4d62-a8b6-017e0ab167d2-kube-api-access-w9pbm\") pod \"glance-db-create-5m8tq\" (UID: \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\") " pod="openstack/glance-db-create-5m8tq"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.000943 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54d1448b-4fcf-49ce-aab4-884a71885bb6-operator-scripts\") pod \"glance-b202-account-create-update-678rc\" (UID: \"54d1448b-4fcf-49ce-aab4-884a71885bb6\") " pod="openstack/glance-b202-account-create-update-678rc"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.001023 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9faf40d-5722-4d62-a8b6-017e0ab167d2-operator-scripts\") pod \"glance-db-create-5m8tq\" (UID: \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\") " pod="openstack/glance-db-create-5m8tq"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.001075 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jrmx\" (UniqueName: \"kubernetes.io/projected/54d1448b-4fcf-49ce-aab4-884a71885bb6-kube-api-access-7jrmx\") pod \"glance-b202-account-create-update-678rc\" (UID: \"54d1448b-4fcf-49ce-aab4-884a71885bb6\") " pod="openstack/glance-b202-account-create-update-678rc"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.001950 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54d1448b-4fcf-49ce-aab4-884a71885bb6-operator-scripts\") pod \"glance-b202-account-create-update-678rc\" (UID: \"54d1448b-4fcf-49ce-aab4-884a71885bb6\") " pod="openstack/glance-b202-account-create-update-678rc"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.002195 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9faf40d-5722-4d62-a8b6-017e0ab167d2-operator-scripts\") pod \"glance-db-create-5m8tq\" (UID: \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\") " pod="openstack/glance-db-create-5m8tq"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.025614 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pbm\" (UniqueName: \"kubernetes.io/projected/a9faf40d-5722-4d62-a8b6-017e0ab167d2-kube-api-access-w9pbm\") pod \"glance-db-create-5m8tq\" (UID: \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\") " pod="openstack/glance-db-create-5m8tq"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.030083 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jrmx\" (UniqueName: \"kubernetes.io/projected/54d1448b-4fcf-49ce-aab4-884a71885bb6-kube-api-access-7jrmx\") pod \"glance-b202-account-create-update-678rc\" (UID: \"54d1448b-4fcf-49ce-aab4-884a71885bb6\") " pod="openstack/glance-b202-account-create-update-678rc"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.153009 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b202-account-create-update-678rc"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.181688 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5m8tq"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.298764 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.322463 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-56b8-account-create-update-2c67x"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.324365 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56b8-account-create-update-2c67x"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.329379 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.339142 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56b8-account-create-update-2c67x"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.416436 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89fg\" (UniqueName: \"kubernetes.io/projected/aecd8d5a-ebc5-430d-b54d-f027d02c3def-kube-api-access-d89fg\") pod \"keystone-56b8-account-create-update-2c67x\" (UID: \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\") " pod="openstack/keystone-56b8-account-create-update-2c67x"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.416604 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aecd8d5a-ebc5-430d-b54d-f027d02c3def-operator-scripts\") pod \"keystone-56b8-account-create-update-2c67x\" (UID: \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\") " pod="openstack/keystone-56b8-account-create-update-2c67x"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.418016 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7b33-account-create-update-zxwc5"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.419815 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b33-account-create-update-zxwc5"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.427184 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.433190 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7b33-account-create-update-zxwc5"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.518744 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aecd8d5a-ebc5-430d-b54d-f027d02c3def-operator-scripts\") pod \"keystone-56b8-account-create-update-2c67x\" (UID: \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\") " pod="openstack/keystone-56b8-account-create-update-2c67x"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.519168 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d89fg\" (UniqueName: \"kubernetes.io/projected/aecd8d5a-ebc5-430d-b54d-f027d02c3def-kube-api-access-d89fg\") pod \"keystone-56b8-account-create-update-2c67x\" (UID: \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\") " pod="openstack/keystone-56b8-account-create-update-2c67x"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.519869 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aecd8d5a-ebc5-430d-b54d-f027d02c3def-operator-scripts\") pod \"keystone-56b8-account-create-update-2c67x\" (UID: \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\") " pod="openstack/keystone-56b8-account-create-update-2c67x"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.542192 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d89fg\" (UniqueName: \"kubernetes.io/projected/aecd8d5a-ebc5-430d-b54d-f027d02c3def-kube-api-access-d89fg\") pod \"keystone-56b8-account-create-update-2c67x\" (UID: \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\") " pod="openstack/keystone-56b8-account-create-update-2c67x"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.620682 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/117d7e22-88db-4dd9-a3d9-625dd1b577de-operator-scripts\") pod \"placement-7b33-account-create-update-zxwc5\" (UID: \"117d7e22-88db-4dd9-a3d9-625dd1b577de\") " pod="openstack/placement-7b33-account-create-update-zxwc5"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.621136 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjwr\" (UniqueName: \"kubernetes.io/projected/117d7e22-88db-4dd9-a3d9-625dd1b577de-kube-api-access-4mjwr\") pod \"placement-7b33-account-create-update-zxwc5\" (UID: \"117d7e22-88db-4dd9-a3d9-625dd1b577de\") " pod="openstack/placement-7b33-account-create-update-zxwc5"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.658801 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56b8-account-create-update-2c67x"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.723318 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mjwr\" (UniqueName: \"kubernetes.io/projected/117d7e22-88db-4dd9-a3d9-625dd1b577de-kube-api-access-4mjwr\") pod \"placement-7b33-account-create-update-zxwc5\" (UID: \"117d7e22-88db-4dd9-a3d9-625dd1b577de\") " pod="openstack/placement-7b33-account-create-update-zxwc5"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.723439 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/117d7e22-88db-4dd9-a3d9-625dd1b577de-operator-scripts\") pod \"placement-7b33-account-create-update-zxwc5\" (UID: \"117d7e22-88db-4dd9-a3d9-625dd1b577de\") " pod="openstack/placement-7b33-account-create-update-zxwc5"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.724365 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/117d7e22-88db-4dd9-a3d9-625dd1b577de-operator-scripts\") pod \"placement-7b33-account-create-update-zxwc5\" (UID: \"117d7e22-88db-4dd9-a3d9-625dd1b577de\") " pod="openstack/placement-7b33-account-create-update-zxwc5"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.749628 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mjwr\" (UniqueName: \"kubernetes.io/projected/117d7e22-88db-4dd9-a3d9-625dd1b577de-kube-api-access-4mjwr\") pod \"placement-7b33-account-create-update-zxwc5\" (UID: \"117d7e22-88db-4dd9-a3d9-625dd1b577de\") " pod="openstack/placement-7b33-account-create-update-zxwc5"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.756548 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6hwhh"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.758317 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6hwhh"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.778921 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6hwhh"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.839511 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hkstm"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.843752 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hkstm"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.872146 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hkstm"]
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.927052 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcm99\" (UniqueName: \"kubernetes.io/projected/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-kube-api-access-lcm99\") pod \"keystone-db-create-6hwhh\" (UID: \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\") " pod="openstack/keystone-db-create-6hwhh"
Mar 12 08:21:56 crc kubenswrapper[4809]: I0312 08:21:56.927452 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-operator-scripts\") pod \"keystone-db-create-6hwhh\" (UID: \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\") " pod="openstack/keystone-db-create-6hwhh"
Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.031957 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbl8s\" (UniqueName: \"kubernetes.io/projected/5bea16ac-c763-4dea-891e-35af9814c6a8-kube-api-access-dbl8s\") pod \"placement-db-create-hkstm\" (UID: \"5bea16ac-c763-4dea-891e-35af9814c6a8\") " pod="openstack/placement-db-create-hkstm"
Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.032056 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-operator-scripts\") pod \"keystone-db-create-6hwhh\" (UID: \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\") " pod="openstack/keystone-db-create-6hwhh"
Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.032226 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea16ac-c763-4dea-891e-35af9814c6a8-operator-scripts\") pod \"placement-db-create-hkstm\" (UID: \"5bea16ac-c763-4dea-891e-35af9814c6a8\") " pod="openstack/placement-db-create-hkstm"
Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.032310 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0"
Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.032495 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcm99\" (UniqueName: \"kubernetes.io/projected/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-kube-api-access-lcm99\") pod \"keystone-db-create-6hwhh\" (UID: \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\") " pod="openstack/keystone-db-create-6hwhh"
Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.033343 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-operator-scripts\") pod \"keystone-db-create-6hwhh\" (UID: \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\") " pod="openstack/keystone-db-create-6hwhh"
Mar 12 08:21:57 crc kubenswrapper[4809]:
E0312 08:21:57.033531 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 08:21:57 crc kubenswrapper[4809]: E0312 08:21:57.033556 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 08:21:57 crc kubenswrapper[4809]: E0312 08:21:57.033827 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift podName:7b325264-3ac9-446e-b820-c40d942263e6 nodeName:}" failed. No retries permitted until 2026-03-12 08:22:05.033800462 +0000 UTC m=+1398.615836395 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift") pod "swift-storage-0" (UID: "7b325264-3ac9-446e-b820-c40d942263e6") : configmap "swift-ring-files" not found Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.040282 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7b33-account-create-update-zxwc5" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.076555 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcm99\" (UniqueName: \"kubernetes.io/projected/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-kube-api-access-lcm99\") pod \"keystone-db-create-6hwhh\" (UID: \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\") " pod="openstack/keystone-db-create-6hwhh" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.134461 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbl8s\" (UniqueName: \"kubernetes.io/projected/5bea16ac-c763-4dea-891e-35af9814c6a8-kube-api-access-dbl8s\") pod \"placement-db-create-hkstm\" (UID: \"5bea16ac-c763-4dea-891e-35af9814c6a8\") " pod="openstack/placement-db-create-hkstm" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.134554 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea16ac-c763-4dea-891e-35af9814c6a8-operator-scripts\") pod \"placement-db-create-hkstm\" (UID: \"5bea16ac-c763-4dea-891e-35af9814c6a8\") " pod="openstack/placement-db-create-hkstm" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.135424 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea16ac-c763-4dea-891e-35af9814c6a8-operator-scripts\") pod \"placement-db-create-hkstm\" (UID: \"5bea16ac-c763-4dea-891e-35af9814c6a8\") " pod="openstack/placement-db-create-hkstm" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.140135 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae78d40-ac67-4ec6-bf05-7f615d22aea9" path="/var/lib/kubelet/pods/dae78d40-ac67-4ec6-bf05-7f615d22aea9/volumes" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.142221 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6hwhh" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.172765 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbl8s\" (UniqueName: \"kubernetes.io/projected/5bea16ac-c763-4dea-891e-35af9814c6a8-kube-api-access-dbl8s\") pod \"placement-db-create-hkstm\" (UID: \"5bea16ac-c763-4dea-891e-35af9814c6a8\") " pod="openstack/placement-db-create-hkstm" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.469297 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hkstm" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.521797 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-t975n"] Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.523264 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.541567 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-t975n"] Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.650776 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68799bd7-b150-4834-a9c3-e1d95cb2af7f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-t975n\" (UID: \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\") " pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.650939 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tzhw\" (UniqueName: \"kubernetes.io/projected/68799bd7-b150-4834-a9c3-e1d95cb2af7f-kube-api-access-7tzhw\") pod \"mysqld-exporter-openstack-db-create-t975n\" (UID: 
\"68799bd7-b150-4834-a9c3-e1d95cb2af7f\") " pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.747813 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-4855-account-create-update-m9b2t"] Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.749750 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.753131 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tzhw\" (UniqueName: \"kubernetes.io/projected/68799bd7-b150-4834-a9c3-e1d95cb2af7f-kube-api-access-7tzhw\") pod \"mysqld-exporter-openstack-db-create-t975n\" (UID: \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\") " pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.753382 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68799bd7-b150-4834-a9c3-e1d95cb2af7f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-t975n\" (UID: \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\") " pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.753139 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.754200 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68799bd7-b150-4834-a9c3-e1d95cb2af7f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-t975n\" (UID: \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\") " pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.774498 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tzhw\" (UniqueName: \"kubernetes.io/projected/68799bd7-b150-4834-a9c3-e1d95cb2af7f-kube-api-access-7tzhw\") pod \"mysqld-exporter-openstack-db-create-t975n\" (UID: \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\") " pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.780748 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4855-account-create-update-m9b2t"] Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.855003 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d092868-620f-4652-a74b-a3650282474a-operator-scripts\") pod \"mysqld-exporter-4855-account-create-update-m9b2t\" (UID: \"0d092868-620f-4652-a74b-a3650282474a\") " pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.855091 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzdkk\" (UniqueName: \"kubernetes.io/projected/0d092868-620f-4652-a74b-a3650282474a-kube-api-access-gzdkk\") pod \"mysqld-exporter-4855-account-create-update-m9b2t\" (UID: \"0d092868-620f-4652-a74b-a3650282474a\") " pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.861912 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.964980 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d092868-620f-4652-a74b-a3650282474a-operator-scripts\") pod \"mysqld-exporter-4855-account-create-update-m9b2t\" (UID: \"0d092868-620f-4652-a74b-a3650282474a\") " pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.965160 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzdkk\" (UniqueName: \"kubernetes.io/projected/0d092868-620f-4652-a74b-a3650282474a-kube-api-access-gzdkk\") pod \"mysqld-exporter-4855-account-create-update-m9b2t\" (UID: \"0d092868-620f-4652-a74b-a3650282474a\") " pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.966470 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d092868-620f-4652-a74b-a3650282474a-operator-scripts\") pod \"mysqld-exporter-4855-account-create-update-m9b2t\" (UID: \"0d092868-620f-4652-a74b-a3650282474a\") " pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:21:57 crc kubenswrapper[4809]: I0312 08:21:57.982343 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzdkk\" (UniqueName: \"kubernetes.io/projected/0d092868-620f-4652-a74b-a3650282474a-kube-api-access-gzdkk\") pod \"mysqld-exporter-4855-account-create-update-m9b2t\" (UID: \"0d092868-620f-4652-a74b-a3650282474a\") " pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:21:58 crc kubenswrapper[4809]: I0312 08:21:58.117312 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.237927 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.360828 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-jfdbd"] Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.361052 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" podUID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" containerName="dnsmasq-dns" containerID="cri-o://666574e56d1a4791da92d7fb32d184f5d2a8b1928beb178e1be4d7bb4c92aa95" gracePeriod=10 Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.362256 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.830768 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b202-account-create-update-678rc"] Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.893925 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b202-account-create-update-678rc" event={"ID":"54d1448b-4fcf-49ce-aab4-884a71885bb6","Type":"ContainerStarted","Data":"d689d7f45b899674d947fbf24bac176074949929b49d13df97c044b2e5a29130"} Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.896621 4809 generic.go:334] "Generic (PLEG): container finished" podID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" containerID="666574e56d1a4791da92d7fb32d184f5d2a8b1928beb178e1be4d7bb4c92aa95" exitCode=0 Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.896735 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" 
event={"ID":"f2136d46-3abf-4108-9f88-19dfcadc2bf0","Type":"ContainerDied","Data":"666574e56d1a4791da92d7fb32d184f5d2a8b1928beb178e1be4d7bb4c92aa95"} Mar 12 08:21:59 crc kubenswrapper[4809]: I0312 08:21:59.898665 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerStarted","Data":"cd575372eb730b27c2d60f0239e38424a260f94b7dbba5b1c3812c57c07c508c"} Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.112014 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7b33-account-create-update-zxwc5"] Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.143846 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555062-g8cq6"] Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.145573 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.148976 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.150425 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.154659 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.161854 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555062-g8cq6"] Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.273346 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l9x6\" (UniqueName: \"kubernetes.io/projected/f9671632-9016-48e7-827c-6c440d55245e-kube-api-access-4l9x6\") 
pod \"auto-csr-approver-29555062-g8cq6\" (UID: \"f9671632-9016-48e7-827c-6c440d55245e\") " pod="openshift-infra/auto-csr-approver-29555062-g8cq6" Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.326210 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6hwhh"] Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.365127 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hkstm"] Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.376219 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l9x6\" (UniqueName: \"kubernetes.io/projected/f9671632-9016-48e7-827c-6c440d55245e-kube-api-access-4l9x6\") pod \"auto-csr-approver-29555062-g8cq6\" (UID: \"f9671632-9016-48e7-827c-6c440d55245e\") " pod="openshift-infra/auto-csr-approver-29555062-g8cq6" Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.377701 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-t975n"] Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.386397 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4855-account-create-update-m9b2t"] Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.401525 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l9x6\" (UniqueName: \"kubernetes.io/projected/f9671632-9016-48e7-827c-6c440d55245e-kube-api-access-4l9x6\") pod \"auto-csr-approver-29555062-g8cq6\" (UID: \"f9671632-9016-48e7-827c-6c440d55245e\") " pod="openshift-infra/auto-csr-approver-29555062-g8cq6" Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.445256 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5m8tq"] Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.479808 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56b8-account-create-update-2c67x"] Mar 12 
08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.561224 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.934707 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hkstm" event={"ID":"5bea16ac-c763-4dea-891e-35af9814c6a8","Type":"ContainerStarted","Data":"3c7772715689b58daeff59ad1214869c6a8a8640d2a4e0e821150f039eb73048"} Mar 12 08:22:00 crc kubenswrapper[4809]: I0312 08:22:00.934995 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hkstm" event={"ID":"5bea16ac-c763-4dea-891e-35af9814c6a8","Type":"ContainerStarted","Data":"74d790da50f5081e5e55e4591846c636922bdf34cd1feb4b21d2f71648ce57b9"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.003777 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w9c45" event={"ID":"7c349c64-fcf0-48cf-91c5-fac0131bacc6","Type":"ContainerStarted","Data":"02875af7f91784040d3dff2f284134bf4beb1b7dd95bd2602334fd2622d0b2ba"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.010336 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.010337 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b33-account-create-update-zxwc5" event={"ID":"117d7e22-88db-4dd9-a3d9-625dd1b577de","Type":"ContainerStarted","Data":"1bbfca5d99887738ec2786a81ccecdb8000287233937d66c8101d1c47decc03b"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.010485 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b33-account-create-update-zxwc5" event={"ID":"117d7e22-88db-4dd9-a3d9-625dd1b577de","Type":"ContainerStarted","Data":"658dfa10f21dede66f2e419b336ad9e061bfaab010adbe0a18a642b00b4ee8a2"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.033407 4809 generic.go:334] "Generic (PLEG): container finished" podID="54d1448b-4fcf-49ce-aab4-884a71885bb6" containerID="a45fa71d0bd3b61b59c29d11ca018611976ed910a61665dda3cfc9044cce3425" exitCode=0 Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.033501 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b202-account-create-update-678rc" event={"ID":"54d1448b-4fcf-49ce-aab4-884a71885bb6","Type":"ContainerDied","Data":"a45fa71d0bd3b61b59c29d11ca018611976ed910a61665dda3cfc9044cce3425"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.047991 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-t975n" event={"ID":"68799bd7-b150-4834-a9c3-e1d95cb2af7f","Type":"ContainerStarted","Data":"cd6bce521edb240076096c0a170d4b84896d7898013d73e286de308497ea3de5"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.052369 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6hwhh" event={"ID":"b7b849d3-1e9c-4122-ac6f-d9100a6187d2","Type":"ContainerStarted","Data":"df0b9de8050adc233537769da77340f12a32272343d6be81086a05030e08be19"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 
08:22:01.055338 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-hkstm" podStartSLOduration=5.055315451 podStartE2EDuration="5.055315451s" podCreationTimestamp="2026-03-12 08:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:00.979960067 +0000 UTC m=+1394.561995800" watchObservedRunningTime="2026-03-12 08:22:01.055315451 +0000 UTC m=+1394.637351174" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.074652 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56b8-account-create-update-2c67x" event={"ID":"aecd8d5a-ebc5-430d-b54d-f027d02c3def","Type":"ContainerStarted","Data":"1c57a2385eec63d67171f3239b8df1e5d067b4f547ea7817ed58208be1ae89f0"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.096378 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" event={"ID":"0d092868-620f-4652-a74b-a3650282474a","Type":"ContainerStarted","Data":"d4347ea00a5ec931248c2a63c120945308f81c507af429928b5b3b11b63afd89"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.100424 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5m8tq" event={"ID":"a9faf40d-5722-4d62-a8b6-017e0ab167d2","Type":"ContainerStarted","Data":"51b6902436f8590ade0f021bd5a7cb5cd409c279a4f1b28b8c8623203e9a3a01"} Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.103507 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-w9c45" podStartSLOduration=4.072914713 podStartE2EDuration="9.103379477s" podCreationTimestamp="2026-03-12 08:21:52 +0000 UTC" firstStartedPulling="2026-03-12 08:21:54.257296572 +0000 UTC m=+1387.839332295" lastFinishedPulling="2026-03-12 08:21:59.287761326 +0000 UTC m=+1392.869797059" observedRunningTime="2026-03-12 
08:22:01.036499166 +0000 UTC m=+1394.618534899" watchObservedRunningTime="2026-03-12 08:22:01.103379477 +0000 UTC m=+1394.685415220" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.131778 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7b33-account-create-update-zxwc5" podStartSLOduration=5.131760764 podStartE2EDuration="5.131760764s" podCreationTimestamp="2026-03-12 08:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:01.069281533 +0000 UTC m=+1394.651317256" watchObservedRunningTime="2026-03-12 08:22:01.131760764 +0000 UTC m=+1394.713796497" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.219397 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-ovsdbserver-nb\") pod \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.219484 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-dns-svc\") pod \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.219643 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-config\") pod \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.219700 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s64f9\" (UniqueName: 
\"kubernetes.io/projected/f2136d46-3abf-4108-9f88-19dfcadc2bf0-kube-api-access-s64f9\") pod \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\" (UID: \"f2136d46-3abf-4108-9f88-19dfcadc2bf0\") " Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.309778 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2136d46-3abf-4108-9f88-19dfcadc2bf0-kube-api-access-s64f9" (OuterVolumeSpecName: "kube-api-access-s64f9") pod "f2136d46-3abf-4108-9f88-19dfcadc2bf0" (UID: "f2136d46-3abf-4108-9f88-19dfcadc2bf0"). InnerVolumeSpecName "kube-api-access-s64f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.321257 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s64f9\" (UniqueName: \"kubernetes.io/projected/f2136d46-3abf-4108-9f88-19dfcadc2bf0-kube-api-access-s64f9\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.457311 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-config" (OuterVolumeSpecName: "config") pod "f2136d46-3abf-4108-9f88-19dfcadc2bf0" (UID: "f2136d46-3abf-4108-9f88-19dfcadc2bf0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.457744 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2136d46-3abf-4108-9f88-19dfcadc2bf0" (UID: "f2136d46-3abf-4108-9f88-19dfcadc2bf0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.459130 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2136d46-3abf-4108-9f88-19dfcadc2bf0" (UID: "f2136d46-3abf-4108-9f88-19dfcadc2bf0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.527587 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.527812 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.527873 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2136d46-3abf-4108-9f88-19dfcadc2bf0-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:01 crc kubenswrapper[4809]: I0312 08:22:01.922844 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555062-g8cq6"] Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.082531 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kbtbl"] Mar 12 08:22:02 crc kubenswrapper[4809]: E0312 08:22:02.082971 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" containerName="init" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.082990 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" containerName="init" Mar 12 08:22:02 crc 
kubenswrapper[4809]: E0312 08:22:02.082998 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" containerName="dnsmasq-dns" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.083005 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" containerName="dnsmasq-dns" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.083269 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" containerName="dnsmasq-dns" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.083993 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.088480 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.096359 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kbtbl"] Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.136208 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" event={"ID":"f9671632-9016-48e7-827c-6c440d55245e","Type":"ContainerStarted","Data":"8963b0c1befd2eb91246e155130bcfa171d284efe4b06736fbe4984cd5a5ce25"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.142140 4809 generic.go:334] "Generic (PLEG): container finished" podID="0d092868-620f-4652-a74b-a3650282474a" containerID="353e627a3c11215f87a5cb13740bac75c8510ed931ce3804dbf24ac9a081b6bd" exitCode=0 Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.142599 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" 
event={"ID":"0d092868-620f-4652-a74b-a3650282474a","Type":"ContainerDied","Data":"353e627a3c11215f87a5cb13740bac75c8510ed931ce3804dbf24ac9a081b6bd"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.144395 4809 generic.go:334] "Generic (PLEG): container finished" podID="5bea16ac-c763-4dea-891e-35af9814c6a8" containerID="3c7772715689b58daeff59ad1214869c6a8a8640d2a4e0e821150f039eb73048" exitCode=0 Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.144452 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hkstm" event={"ID":"5bea16ac-c763-4dea-891e-35af9814c6a8","Type":"ContainerDied","Data":"3c7772715689b58daeff59ad1214869c6a8a8640d2a4e0e821150f039eb73048"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.147784 4809 generic.go:334] "Generic (PLEG): container finished" podID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerID="a6ede6ce0480318857787a54e82e85fbafb16bed43eeb8a9ce79c6d82759c251" exitCode=0 Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.147840 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"31094b6a-8ac7-4bbf-883e-aabf280fe22e","Type":"ContainerDied","Data":"a6ede6ce0480318857787a54e82e85fbafb16bed43eeb8a9ce79c6d82759c251"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.160196 4809 generic.go:334] "Generic (PLEG): container finished" podID="117d7e22-88db-4dd9-a3d9-625dd1b577de" containerID="1bbfca5d99887738ec2786a81ccecdb8000287233937d66c8101d1c47decc03b" exitCode=0 Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.160231 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b33-account-create-update-zxwc5" event={"ID":"117d7e22-88db-4dd9-a3d9-625dd1b577de","Type":"ContainerDied","Data":"1bbfca5d99887738ec2786a81ccecdb8000287233937d66c8101d1c47decc03b"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.162180 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="b7b849d3-1e9c-4122-ac6f-d9100a6187d2" containerID="43313c59113888de1212b1194c98774b04ff88c098f313032020d1022a08e568" exitCode=0 Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.162259 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6hwhh" event={"ID":"b7b849d3-1e9c-4122-ac6f-d9100a6187d2","Type":"ContainerDied","Data":"43313c59113888de1212b1194c98774b04ff88c098f313032020d1022a08e568"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.167658 4809 generic.go:334] "Generic (PLEG): container finished" podID="68799bd7-b150-4834-a9c3-e1d95cb2af7f" containerID="f2e363d1208bee8705631da4519b442de6b732cdf38c31bebe25e81e761b9b5e" exitCode=0 Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.167717 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-t975n" event={"ID":"68799bd7-b150-4834-a9c3-e1d95cb2af7f","Type":"ContainerDied","Data":"f2e363d1208bee8705631da4519b442de6b732cdf38c31bebe25e81e761b9b5e"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.169693 4809 generic.go:334] "Generic (PLEG): container finished" podID="d043d696-d09f-4c43-8960-0d31789103e8" containerID="853e1d2444390fdd233ca4750bb5446e59760186012bfcafdad952e0f8516afa" exitCode=0 Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.169732 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d043d696-d09f-4c43-8960-0d31789103e8","Type":"ContainerDied","Data":"853e1d2444390fdd233ca4750bb5446e59760186012bfcafdad952e0f8516afa"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.177892 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5m8tq" event={"ID":"a9faf40d-5722-4d62-a8b6-017e0ab167d2","Type":"ContainerStarted","Data":"868f6f1d0d73f5ae768ae483c5a77830d1b0a6b89ca79c86657e30edd7ce45e7"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.186604 4809 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" event={"ID":"f2136d46-3abf-4108-9f88-19dfcadc2bf0","Type":"ContainerDied","Data":"288d508537e73fc44ee90c45baa7372ddd47dd4e147855384a147141e17c9029"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.186671 4809 scope.go:117] "RemoveContainer" containerID="666574e56d1a4791da92d7fb32d184f5d2a8b1928beb178e1be4d7bb4c92aa95" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.186799 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-jfdbd" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.199820 4809 generic.go:334] "Generic (PLEG): container finished" podID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerID="aa4dcb211340daf33201d36c37708b689c8f08f8477256ee26540cef8fdd886e" exitCode=0 Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.200028 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c45bb76f-92d7-4214-9ce3-c64361a40416","Type":"ContainerDied","Data":"aa4dcb211340daf33201d36c37708b689c8f08f8477256ee26540cef8fdd886e"} Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.217476 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-5m8tq" podStartSLOduration=7.2174497 podStartE2EDuration="7.2174497s" podCreationTimestamp="2026-03-12 08:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:02.205171594 +0000 UTC m=+1395.787207327" watchObservedRunningTime="2026-03-12 08:22:02.2174497 +0000 UTC m=+1395.799485443" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.231308 4809 scope.go:117] "RemoveContainer" containerID="b118e6526ec5c87f478c419791ca43146967f29cfb365d631b5202f3e9dd942f" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.244561 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf54bb83-9452-47d2-ba9d-dc0b0e503202-operator-scripts\") pod \"root-account-create-update-kbtbl\" (UID: \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\") " pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.244931 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7rp\" (UniqueName: \"kubernetes.io/projected/cf54bb83-9452-47d2-ba9d-dc0b0e503202-kube-api-access-zt7rp\") pod \"root-account-create-update-kbtbl\" (UID: \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\") " pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.307302 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-jfdbd"] Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.316834 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-jfdbd"] Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.347252 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt7rp\" (UniqueName: \"kubernetes.io/projected/cf54bb83-9452-47d2-ba9d-dc0b0e503202-kube-api-access-zt7rp\") pod \"root-account-create-update-kbtbl\" (UID: \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\") " pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.347338 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf54bb83-9452-47d2-ba9d-dc0b0e503202-operator-scripts\") pod \"root-account-create-update-kbtbl\" (UID: \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\") " pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.352184 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf54bb83-9452-47d2-ba9d-dc0b0e503202-operator-scripts\") pod \"root-account-create-update-kbtbl\" (UID: \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\") " pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.385080 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt7rp\" (UniqueName: \"kubernetes.io/projected/cf54bb83-9452-47d2-ba9d-dc0b0e503202-kube-api-access-zt7rp\") pod \"root-account-create-update-kbtbl\" (UID: \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\") " pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.404731 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.901430 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b202-account-create-update-678rc" Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.984097 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jrmx\" (UniqueName: \"kubernetes.io/projected/54d1448b-4fcf-49ce-aab4-884a71885bb6-kube-api-access-7jrmx\") pod \"54d1448b-4fcf-49ce-aab4-884a71885bb6\" (UID: \"54d1448b-4fcf-49ce-aab4-884a71885bb6\") " Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.984250 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54d1448b-4fcf-49ce-aab4-884a71885bb6-operator-scripts\") pod \"54d1448b-4fcf-49ce-aab4-884a71885bb6\" (UID: \"54d1448b-4fcf-49ce-aab4-884a71885bb6\") " Mar 12 08:22:02 crc kubenswrapper[4809]: I0312 08:22:02.985755 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54d1448b-4fcf-49ce-aab4-884a71885bb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54d1448b-4fcf-49ce-aab4-884a71885bb6" (UID: "54d1448b-4fcf-49ce-aab4-884a71885bb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.029287 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d1448b-4fcf-49ce-aab4-884a71885bb6-kube-api-access-7jrmx" (OuterVolumeSpecName: "kube-api-access-7jrmx") pod "54d1448b-4fcf-49ce-aab4-884a71885bb6" (UID: "54d1448b-4fcf-49ce-aab4-884a71885bb6"). InnerVolumeSpecName "kube-api-access-7jrmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.085270 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kbtbl"] Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.092497 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jrmx\" (UniqueName: \"kubernetes.io/projected/54d1448b-4fcf-49ce-aab4-884a71885bb6-kube-api-access-7jrmx\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.092531 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54d1448b-4fcf-49ce-aab4-884a71885bb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.135851 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2136d46-3abf-4108-9f88-19dfcadc2bf0" path="/var/lib/kubelet/pods/f2136d46-3abf-4108-9f88-19dfcadc2bf0/volumes" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.213934 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" event={"ID":"f9671632-9016-48e7-827c-6c440d55245e","Type":"ContainerStarted","Data":"cdde10bb8ba356c33cd0f0e3dd3997a2688213f7550653d5a00b2251fc8255cc"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.218554 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d043d696-d09f-4c43-8960-0d31789103e8","Type":"ContainerStarted","Data":"a5d33dc459ac717259e45f13ec4149298d61afa5b3792d19a16b0edda76ae037"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.218779 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.222189 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b202-account-create-update-678rc" 
event={"ID":"54d1448b-4fcf-49ce-aab4-884a71885bb6","Type":"ContainerDied","Data":"d689d7f45b899674d947fbf24bac176074949929b49d13df97c044b2e5a29130"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.222311 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d689d7f45b899674d947fbf24bac176074949929b49d13df97c044b2e5a29130" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.222242 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b202-account-create-update-678rc" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.223826 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kbtbl" event={"ID":"cf54bb83-9452-47d2-ba9d-dc0b0e503202","Type":"ContainerStarted","Data":"328e11e0dcc4f2cfbfb1523dd0c7a5ad2105735c2ca5a178039e399f9f9859e2"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.225335 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerStarted","Data":"8fe98d57f4f19a9c27d3d704cb13af2e6683cbc0c4edb4b0457e6d79607de148"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.227677 4809 generic.go:334] "Generic (PLEG): container finished" podID="a9faf40d-5722-4d62-a8b6-017e0ab167d2" containerID="868f6f1d0d73f5ae768ae483c5a77830d1b0a6b89ca79c86657e30edd7ce45e7" exitCode=0 Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.227767 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5m8tq" event={"ID":"a9faf40d-5722-4d62-a8b6-017e0ab167d2","Type":"ContainerDied","Data":"868f6f1d0d73f5ae768ae483c5a77830d1b0a6b89ca79c86657e30edd7ce45e7"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.232456 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"31094b6a-8ac7-4bbf-883e-aabf280fe22e","Type":"ContainerStarted","Data":"3fa858e890d83738234fdbb2c73ad1d7d67bdb9834004517350810694cfe0668"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.232780 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.249491 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" podStartSLOduration=2.229003765 podStartE2EDuration="3.249468646s" podCreationTimestamp="2026-03-12 08:22:00 +0000 UTC" firstStartedPulling="2026-03-12 08:22:01.373020369 +0000 UTC m=+1394.955056102" lastFinishedPulling="2026-03-12 08:22:02.39348525 +0000 UTC m=+1395.975520983" observedRunningTime="2026-03-12 08:22:03.247893043 +0000 UTC m=+1396.829928766" watchObservedRunningTime="2026-03-12 08:22:03.249468646 +0000 UTC m=+1396.831504379" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.250178 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c45bb76f-92d7-4214-9ce3-c64361a40416","Type":"ContainerStarted","Data":"189ef92e31873bc3527a72453e076cc8cc76e11ec56712bdc61eaa1ca3c4e771"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.251946 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.262821 4809 generic.go:334] "Generic (PLEG): container finished" podID="aecd8d5a-ebc5-430d-b54d-f027d02c3def" containerID="b8b3a2f4d8a12ac11be4677768907f37b93be1114ae893e614e8d393fec2c0c2" exitCode=0 Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.262974 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56b8-account-create-update-2c67x" 
event={"ID":"aecd8d5a-ebc5-430d-b54d-f027d02c3def","Type":"ContainerDied","Data":"b8b3a2f4d8a12ac11be4677768907f37b93be1114ae893e614e8d393fec2c0c2"} Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.283686 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.182109655 podStartE2EDuration="1m2.283664223s" podCreationTimestamp="2026-03-12 08:21:01 +0000 UTC" firstStartedPulling="2026-03-12 08:21:03.920930248 +0000 UTC m=+1337.502965981" lastFinishedPulling="2026-03-12 08:21:27.022484806 +0000 UTC m=+1360.604520549" observedRunningTime="2026-03-12 08:22:03.282104711 +0000 UTC m=+1396.864140454" watchObservedRunningTime="2026-03-12 08:22:03.283664223 +0000 UTC m=+1396.865699956" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.344221 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=38.902261833 podStartE2EDuration="1m2.34419846s" podCreationTimestamp="2026-03-12 08:21:01 +0000 UTC" firstStartedPulling="2026-03-12 08:21:03.636328056 +0000 UTC m=+1337.218363789" lastFinishedPulling="2026-03-12 08:21:27.078264693 +0000 UTC m=+1360.660300416" observedRunningTime="2026-03-12 08:22:03.339356618 +0000 UTC m=+1396.921392351" watchObservedRunningTime="2026-03-12 08:22:03.34419846 +0000 UTC m=+1396.926234203" Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.408883 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.62974503 podStartE2EDuration="1m2.408868851s" podCreationTimestamp="2026-03-12 08:21:01 +0000 UTC" firstStartedPulling="2026-03-12 08:21:03.396222001 +0000 UTC m=+1336.978257734" lastFinishedPulling="2026-03-12 08:21:27.175345822 +0000 UTC m=+1360.757381555" observedRunningTime="2026-03-12 08:22:03.407409091 +0000 UTC m=+1396.989444824" watchObservedRunningTime="2026-03-12 08:22:03.408868851 +0000 UTC m=+1396.990904584" 
Mar 12 08:22:03 crc kubenswrapper[4809]: I0312 08:22:03.964887 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.120867 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tzhw\" (UniqueName: \"kubernetes.io/projected/68799bd7-b150-4834-a9c3-e1d95cb2af7f-kube-api-access-7tzhw\") pod \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\" (UID: \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.121388 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68799bd7-b150-4834-a9c3-e1d95cb2af7f-operator-scripts\") pod \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\" (UID: \"68799bd7-b150-4834-a9c3-e1d95cb2af7f\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.127182 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68799bd7-b150-4834-a9c3-e1d95cb2af7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68799bd7-b150-4834-a9c3-e1d95cb2af7f" (UID: "68799bd7-b150-4834-a9c3-e1d95cb2af7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.158230 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68799bd7-b150-4834-a9c3-e1d95cb2af7f-kube-api-access-7tzhw" (OuterVolumeSpecName: "kube-api-access-7tzhw") pod "68799bd7-b150-4834-a9c3-e1d95cb2af7f" (UID: "68799bd7-b150-4834-a9c3-e1d95cb2af7f"). InnerVolumeSpecName "kube-api-access-7tzhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.224621 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tzhw\" (UniqueName: \"kubernetes.io/projected/68799bd7-b150-4834-a9c3-e1d95cb2af7f-kube-api-access-7tzhw\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.224932 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68799bd7-b150-4834-a9c3-e1d95cb2af7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.278094 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-t975n" event={"ID":"68799bd7-b150-4834-a9c3-e1d95cb2af7f","Type":"ContainerDied","Data":"cd6bce521edb240076096c0a170d4b84896d7898013d73e286de308497ea3de5"} Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.278174 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd6bce521edb240076096c0a170d4b84896d7898013d73e286de308497ea3de5" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.278254 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-t975n" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.283909 4809 generic.go:334] "Generic (PLEG): container finished" podID="cf54bb83-9452-47d2-ba9d-dc0b0e503202" containerID="20b12842050b232e599ccac60b81b93166fcd085b3529609c66c38802241c416" exitCode=0 Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.284103 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kbtbl" event={"ID":"cf54bb83-9452-47d2-ba9d-dc0b0e503202","Type":"ContainerDied","Data":"20b12842050b232e599ccac60b81b93166fcd085b3529609c66c38802241c416"} Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.290532 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hkstm" event={"ID":"5bea16ac-c763-4dea-891e-35af9814c6a8","Type":"ContainerDied","Data":"74d790da50f5081e5e55e4591846c636922bdf34cd1feb4b21d2f71648ce57b9"} Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.290591 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d790da50f5081e5e55e4591846c636922bdf34cd1feb4b21d2f71648ce57b9" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.301556 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b33-account-create-update-zxwc5" event={"ID":"117d7e22-88db-4dd9-a3d9-625dd1b577de","Type":"ContainerDied","Data":"658dfa10f21dede66f2e419b336ad9e061bfaab010adbe0a18a642b00b4ee8a2"} Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.301780 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="658dfa10f21dede66f2e419b336ad9e061bfaab010adbe0a18a642b00b4ee8a2" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.305591 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9671632-9016-48e7-827c-6c440d55245e" containerID="cdde10bb8ba356c33cd0f0e3dd3997a2688213f7550653d5a00b2251fc8255cc" exitCode=0 Mar 12 08:22:04 
crc kubenswrapper[4809]: I0312 08:22:04.305936 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" event={"ID":"f9671632-9016-48e7-827c-6c440d55245e","Type":"ContainerDied","Data":"cdde10bb8ba356c33cd0f0e3dd3997a2688213f7550653d5a00b2251fc8255cc"} Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.355722 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hkstm" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.361884 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b33-account-create-update-zxwc5" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.534023 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea16ac-c763-4dea-891e-35af9814c6a8-operator-scripts\") pod \"5bea16ac-c763-4dea-891e-35af9814c6a8\" (UID: \"5bea16ac-c763-4dea-891e-35af9814c6a8\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.534384 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/117d7e22-88db-4dd9-a3d9-625dd1b577de-operator-scripts\") pod \"117d7e22-88db-4dd9-a3d9-625dd1b577de\" (UID: \"117d7e22-88db-4dd9-a3d9-625dd1b577de\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.534528 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mjwr\" (UniqueName: \"kubernetes.io/projected/117d7e22-88db-4dd9-a3d9-625dd1b577de-kube-api-access-4mjwr\") pod \"117d7e22-88db-4dd9-a3d9-625dd1b577de\" (UID: \"117d7e22-88db-4dd9-a3d9-625dd1b577de\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.534573 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbl8s\" (UniqueName: 
\"kubernetes.io/projected/5bea16ac-c763-4dea-891e-35af9814c6a8-kube-api-access-dbl8s\") pod \"5bea16ac-c763-4dea-891e-35af9814c6a8\" (UID: \"5bea16ac-c763-4dea-891e-35af9814c6a8\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.539094 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/117d7e22-88db-4dd9-a3d9-625dd1b577de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "117d7e22-88db-4dd9-a3d9-625dd1b577de" (UID: "117d7e22-88db-4dd9-a3d9-625dd1b577de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.544196 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bea16ac-c763-4dea-891e-35af9814c6a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5bea16ac-c763-4dea-891e-35af9814c6a8" (UID: "5bea16ac-c763-4dea-891e-35af9814c6a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.550143 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/117d7e22-88db-4dd9-a3d9-625dd1b577de-kube-api-access-4mjwr" (OuterVolumeSpecName: "kube-api-access-4mjwr") pod "117d7e22-88db-4dd9-a3d9-625dd1b577de" (UID: "117d7e22-88db-4dd9-a3d9-625dd1b577de"). InnerVolumeSpecName "kube-api-access-4mjwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.551189 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6hwhh" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.555499 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bea16ac-c763-4dea-891e-35af9814c6a8-kube-api-access-dbl8s" (OuterVolumeSpecName: "kube-api-access-dbl8s") pod "5bea16ac-c763-4dea-891e-35af9814c6a8" (UID: "5bea16ac-c763-4dea-891e-35af9814c6a8"). InnerVolumeSpecName "kube-api-access-dbl8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.648220 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcm99\" (UniqueName: \"kubernetes.io/projected/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-kube-api-access-lcm99\") pod \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\" (UID: \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.648303 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-operator-scripts\") pod \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\" (UID: \"b7b849d3-1e9c-4122-ac6f-d9100a6187d2\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.651529 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.652426 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-kube-api-access-lcm99" (OuterVolumeSpecName: "kube-api-access-lcm99") pod "b7b849d3-1e9c-4122-ac6f-d9100a6187d2" (UID: "b7b849d3-1e9c-4122-ac6f-d9100a6187d2"). InnerVolumeSpecName "kube-api-access-lcm99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.655928 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7b849d3-1e9c-4122-ac6f-d9100a6187d2" (UID: "b7b849d3-1e9c-4122-ac6f-d9100a6187d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.656326 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mjwr\" (UniqueName: \"kubernetes.io/projected/117d7e22-88db-4dd9-a3d9-625dd1b577de-kube-api-access-4mjwr\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.656374 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbl8s\" (UniqueName: \"kubernetes.io/projected/5bea16ac-c763-4dea-891e-35af9814c6a8-kube-api-access-dbl8s\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.656385 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bea16ac-c763-4dea-891e-35af9814c6a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.656395 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcm99\" (UniqueName: \"kubernetes.io/projected/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-kube-api-access-lcm99\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.656405 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b849d3-1e9c-4122-ac6f-d9100a6187d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.656413 4809 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/117d7e22-88db-4dd9-a3d9-625dd1b577de-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.757541 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d092868-620f-4652-a74b-a3650282474a-operator-scripts\") pod \"0d092868-620f-4652-a74b-a3650282474a\" (UID: \"0d092868-620f-4652-a74b-a3650282474a\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.758548 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d092868-620f-4652-a74b-a3650282474a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d092868-620f-4652-a74b-a3650282474a" (UID: "0d092868-620f-4652-a74b-a3650282474a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.758573 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzdkk\" (UniqueName: \"kubernetes.io/projected/0d092868-620f-4652-a74b-a3650282474a-kube-api-access-gzdkk\") pod \"0d092868-620f-4652-a74b-a3650282474a\" (UID: \"0d092868-620f-4652-a74b-a3650282474a\") " Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.760270 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d092868-620f-4652-a74b-a3650282474a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.780362 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d092868-620f-4652-a74b-a3650282474a-kube-api-access-gzdkk" (OuterVolumeSpecName: "kube-api-access-gzdkk") pod "0d092868-620f-4652-a74b-a3650282474a" (UID: "0d092868-620f-4652-a74b-a3650282474a"). 
InnerVolumeSpecName "kube-api-access-gzdkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.865794 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzdkk\" (UniqueName: \"kubernetes.io/projected/0d092868-620f-4652-a74b-a3650282474a-kube-api-access-gzdkk\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:04 crc kubenswrapper[4809]: I0312 08:22:04.978396 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5m8tq" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.072418 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9faf40d-5722-4d62-a8b6-017e0ab167d2-operator-scripts\") pod \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\" (UID: \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\") " Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.072660 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9pbm\" (UniqueName: \"kubernetes.io/projected/a9faf40d-5722-4d62-a8b6-017e0ab167d2-kube-api-access-w9pbm\") pod \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\" (UID: \"a9faf40d-5722-4d62-a8b6-017e0ab167d2\") " Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.073096 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.074105 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9faf40d-5722-4d62-a8b6-017e0ab167d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9faf40d-5722-4d62-a8b6-017e0ab167d2" (UID: "a9faf40d-5722-4d62-a8b6-017e0ab167d2"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:05 crc kubenswrapper[4809]: E0312 08:22:05.074352 4809 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 08:22:05 crc kubenswrapper[4809]: E0312 08:22:05.074373 4809 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 08:22:05 crc kubenswrapper[4809]: E0312 08:22:05.074427 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift podName:7b325264-3ac9-446e-b820-c40d942263e6 nodeName:}" failed. No retries permitted until 2026-03-12 08:22:21.074409654 +0000 UTC m=+1414.656445387 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift") pod "swift-storage-0" (UID: "7b325264-3ac9-446e-b820-c40d942263e6") : configmap "swift-ring-files" not found Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.079400 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9faf40d-5722-4d62-a8b6-017e0ab167d2-kube-api-access-w9pbm" (OuterVolumeSpecName: "kube-api-access-w9pbm") pod "a9faf40d-5722-4d62-a8b6-017e0ab167d2" (UID: "a9faf40d-5722-4d62-a8b6-017e0ab167d2"). InnerVolumeSpecName "kube-api-access-w9pbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.080401 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56b8-account-create-update-2c67x" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.175466 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aecd8d5a-ebc5-430d-b54d-f027d02c3def-operator-scripts\") pod \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\" (UID: \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\") " Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.175744 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d89fg\" (UniqueName: \"kubernetes.io/projected/aecd8d5a-ebc5-430d-b54d-f027d02c3def-kube-api-access-d89fg\") pod \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\" (UID: \"aecd8d5a-ebc5-430d-b54d-f027d02c3def\") " Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.176016 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecd8d5a-ebc5-430d-b54d-f027d02c3def-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aecd8d5a-ebc5-430d-b54d-f027d02c3def" (UID: "aecd8d5a-ebc5-430d-b54d-f027d02c3def"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.176715 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9pbm\" (UniqueName: \"kubernetes.io/projected/a9faf40d-5722-4d62-a8b6-017e0ab167d2-kube-api-access-w9pbm\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.176743 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aecd8d5a-ebc5-430d-b54d-f027d02c3def-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.176757 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9faf40d-5722-4d62-a8b6-017e0ab167d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.196925 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecd8d5a-ebc5-430d-b54d-f027d02c3def-kube-api-access-d89fg" (OuterVolumeSpecName: "kube-api-access-d89fg") pod "aecd8d5a-ebc5-430d-b54d-f027d02c3def" (UID: "aecd8d5a-ebc5-430d-b54d-f027d02c3def"). InnerVolumeSpecName "kube-api-access-d89fg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.280298 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d89fg\" (UniqueName: \"kubernetes.io/projected/aecd8d5a-ebc5-430d-b54d-f027d02c3def-kube-api-access-d89fg\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.316911 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6hwhh" event={"ID":"b7b849d3-1e9c-4122-ac6f-d9100a6187d2","Type":"ContainerDied","Data":"df0b9de8050adc233537769da77340f12a32272343d6be81086a05030e08be19"} Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.316997 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df0b9de8050adc233537769da77340f12a32272343d6be81086a05030e08be19" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.316942 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6hwhh" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.318641 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56b8-account-create-update-2c67x" event={"ID":"aecd8d5a-ebc5-430d-b54d-f027d02c3def","Type":"ContainerDied","Data":"1c57a2385eec63d67171f3239b8df1e5d067b4f547ea7817ed58208be1ae89f0"} Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.318692 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c57a2385eec63d67171f3239b8df1e5d067b4f547ea7817ed58208be1ae89f0" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.318657 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56b8-account-create-update-2c67x" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.320485 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" event={"ID":"0d092868-620f-4652-a74b-a3650282474a","Type":"ContainerDied","Data":"d4347ea00a5ec931248c2a63c120945308f81c507af429928b5b3b11b63afd89"} Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.320532 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4347ea00a5ec931248c2a63c120945308f81c507af429928b5b3b11b63afd89" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.320569 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4855-account-create-update-m9b2t" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.323073 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5m8tq" event={"ID":"a9faf40d-5722-4d62-a8b6-017e0ab167d2","Type":"ContainerDied","Data":"51b6902436f8590ade0f021bd5a7cb5cd409c279a4f1b28b8c8623203e9a3a01"} Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.323161 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b33-account-create-update-zxwc5" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.323108 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51b6902436f8590ade0f021bd5a7cb5cd409c279a4f1b28b8c8623203e9a3a01" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.323476 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5m8tq" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.323555 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hkstm" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.483458 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57dffbbf6c-jl4cx" podUID="85ec0faa-1bca-4382-807d-35941e6d88fb" containerName="console" containerID="cri-o://c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638" gracePeriod=15 Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.921373 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:05 crc kubenswrapper[4809]: I0312 08:22:05.926609 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:05.997670 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt7rp\" (UniqueName: \"kubernetes.io/projected/cf54bb83-9452-47d2-ba9d-dc0b0e503202-kube-api-access-zt7rp\") pod \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\" (UID: \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:05.997786 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l9x6\" (UniqueName: \"kubernetes.io/projected/f9671632-9016-48e7-827c-6c440d55245e-kube-api-access-4l9x6\") pod \"f9671632-9016-48e7-827c-6c440d55245e\" (UID: \"f9671632-9016-48e7-827c-6c440d55245e\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:05.998054 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf54bb83-9452-47d2-ba9d-dc0b0e503202-operator-scripts\") pod \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\" (UID: \"cf54bb83-9452-47d2-ba9d-dc0b0e503202\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.009852 4809 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf54bb83-9452-47d2-ba9d-dc0b0e503202-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf54bb83-9452-47d2-ba9d-dc0b0e503202" (UID: "cf54bb83-9452-47d2-ba9d-dc0b0e503202"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.013862 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9671632-9016-48e7-827c-6c440d55245e-kube-api-access-4l9x6" (OuterVolumeSpecName: "kube-api-access-4l9x6") pod "f9671632-9016-48e7-827c-6c440d55245e" (UID: "f9671632-9016-48e7-827c-6c440d55245e"). InnerVolumeSpecName "kube-api-access-4l9x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.015186 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf54bb83-9452-47d2-ba9d-dc0b0e503202-kube-api-access-zt7rp" (OuterVolumeSpecName: "kube-api-access-zt7rp") pod "cf54bb83-9452-47d2-ba9d-dc0b0e503202" (UID: "cf54bb83-9452-47d2-ba9d-dc0b0e503202"). InnerVolumeSpecName "kube-api-access-zt7rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.088480 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57dffbbf6c-jl4cx_85ec0faa-1bca-4382-807d-35941e6d88fb/console/0.log" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.088583 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.100339 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf54bb83-9452-47d2-ba9d-dc0b0e503202-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.100378 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt7rp\" (UniqueName: \"kubernetes.io/projected/cf54bb83-9452-47d2-ba9d-dc0b0e503202-kube-api-access-zt7rp\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.100402 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l9x6\" (UniqueName: \"kubernetes.io/projected/f9671632-9016-48e7-827c-6c440d55245e-kube-api-access-4l9x6\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.176938 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6pfp4"] Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177391 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecd8d5a-ebc5-430d-b54d-f027d02c3def" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177404 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecd8d5a-ebc5-430d-b54d-f027d02c3def" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177418 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68799bd7-b150-4834-a9c3-e1d95cb2af7f" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177425 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="68799bd7-b150-4834-a9c3-e1d95cb2af7f" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177441 4809 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f9671632-9016-48e7-827c-6c440d55245e" containerName="oc" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177449 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9671632-9016-48e7-827c-6c440d55245e" containerName="oc" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177460 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d092868-620f-4652-a74b-a3650282474a" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177466 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d092868-620f-4652-a74b-a3650282474a" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177487 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bea16ac-c763-4dea-891e-35af9814c6a8" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177493 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bea16ac-c763-4dea-891e-35af9814c6a8" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177505 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d1448b-4fcf-49ce-aab4-884a71885bb6" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177510 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d1448b-4fcf-49ce-aab4-884a71885bb6" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177518 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b849d3-1e9c-4122-ac6f-d9100a6187d2" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177523 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b849d3-1e9c-4122-ac6f-d9100a6187d2" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: 
E0312 08:22:06.177546 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9faf40d-5722-4d62-a8b6-017e0ab167d2" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177552 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9faf40d-5722-4d62-a8b6-017e0ab167d2" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177563 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="117d7e22-88db-4dd9-a3d9-625dd1b577de" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177569 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="117d7e22-88db-4dd9-a3d9-625dd1b577de" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177577 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf54bb83-9452-47d2-ba9d-dc0b0e503202" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177583 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf54bb83-9452-47d2-ba9d-dc0b0e503202" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.177593 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ec0faa-1bca-4382-807d-35941e6d88fb" containerName="console" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177599 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ec0faa-1bca-4382-807d-35941e6d88fb" containerName="console" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177795 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="117d7e22-88db-4dd9-a3d9-625dd1b577de" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177814 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf54bb83-9452-47d2-ba9d-dc0b0e503202" 
containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177824 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="68799bd7-b150-4834-a9c3-e1d95cb2af7f" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177840 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9671632-9016-48e7-827c-6c440d55245e" containerName="oc" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177850 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d092868-620f-4652-a74b-a3650282474a" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177857 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bea16ac-c763-4dea-891e-35af9814c6a8" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177866 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b849d3-1e9c-4122-ac6f-d9100a6187d2" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177879 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9faf40d-5722-4d62-a8b6-017e0ab167d2" containerName="mariadb-database-create" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177890 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecd8d5a-ebc5-430d-b54d-f027d02c3def" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177899 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d1448b-4fcf-49ce-aab4-884a71885bb6" containerName="mariadb-account-create-update" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.177910 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ec0faa-1bca-4382-807d-35941e6d88fb" containerName="console" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.178822 4809 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.182305 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6kwkz" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.182446 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.189877 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6pfp4"] Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.200939 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdvv8\" (UniqueName: \"kubernetes.io/projected/85ec0faa-1bca-4382-807d-35941e6d88fb-kube-api-access-qdvv8\") pod \"85ec0faa-1bca-4382-807d-35941e6d88fb\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.200989 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-console-config\") pod \"85ec0faa-1bca-4382-807d-35941e6d88fb\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.201130 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-serving-cert\") pod \"85ec0faa-1bca-4382-807d-35941e6d88fb\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.201157 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-service-ca\") pod \"85ec0faa-1bca-4382-807d-35941e6d88fb\" (UID: 
\"85ec0faa-1bca-4382-807d-35941e6d88fb\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.201381 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-trusted-ca-bundle\") pod \"85ec0faa-1bca-4382-807d-35941e6d88fb\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.201437 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-oauth-config\") pod \"85ec0faa-1bca-4382-807d-35941e6d88fb\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.201578 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-oauth-serving-cert\") pod \"85ec0faa-1bca-4382-807d-35941e6d88fb\" (UID: \"85ec0faa-1bca-4382-807d-35941e6d88fb\") " Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.202371 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-service-ca" (OuterVolumeSpecName: "service-ca") pod "85ec0faa-1bca-4382-807d-35941e6d88fb" (UID: "85ec0faa-1bca-4382-807d-35941e6d88fb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.202769 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "85ec0faa-1bca-4382-807d-35941e6d88fb" (UID: "85ec0faa-1bca-4382-807d-35941e6d88fb"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.203272 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "85ec0faa-1bca-4382-807d-35941e6d88fb" (UID: "85ec0faa-1bca-4382-807d-35941e6d88fb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.203420 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-console-config" (OuterVolumeSpecName: "console-config") pod "85ec0faa-1bca-4382-807d-35941e6d88fb" (UID: "85ec0faa-1bca-4382-807d-35941e6d88fb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.209706 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "85ec0faa-1bca-4382-807d-35941e6d88fb" (UID: "85ec0faa-1bca-4382-807d-35941e6d88fb"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.210335 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "85ec0faa-1bca-4382-807d-35941e6d88fb" (UID: "85ec0faa-1bca-4382-807d-35941e6d88fb"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.210367 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ec0faa-1bca-4382-807d-35941e6d88fb-kube-api-access-qdvv8" (OuterVolumeSpecName: "kube-api-access-qdvv8") pod "85ec0faa-1bca-4382-807d-35941e6d88fb" (UID: "85ec0faa-1bca-4382-807d-35941e6d88fb"). InnerVolumeSpecName "kube-api-access-qdvv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304317 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-combined-ca-bundle\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304411 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-db-sync-config-data\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304486 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-config-data\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304559 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlmz7\" (UniqueName: \"kubernetes.io/projected/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-kube-api-access-hlmz7\") pod 
\"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304625 4809 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304639 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdvv8\" (UniqueName: \"kubernetes.io/projected/85ec0faa-1bca-4382-807d-35941e6d88fb-kube-api-access-qdvv8\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304650 4809 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304658 4809 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304667 4809 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304675 4809 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ec0faa-1bca-4382-807d-35941e6d88fb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.304683 4809 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85ec0faa-1bca-4382-807d-35941e6d88fb-console-oauth-config\") on 
node \"crc\" DevicePath \"\"" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.325849 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555056-49bj8"] Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.335246 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57dffbbf6c-jl4cx_85ec0faa-1bca-4382-807d-35941e6d88fb/console/0.log" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.335309 4809 generic.go:334] "Generic (PLEG): container finished" podID="85ec0faa-1bca-4382-807d-35941e6d88fb" containerID="c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638" exitCode=2 Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.335378 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57dffbbf6c-jl4cx" event={"ID":"85ec0faa-1bca-4382-807d-35941e6d88fb","Type":"ContainerDied","Data":"c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638"} Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.335410 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57dffbbf6c-jl4cx" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.335434 4809 scope.go:117] "RemoveContainer" containerID="c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.335417 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57dffbbf6c-jl4cx" event={"ID":"85ec0faa-1bca-4382-807d-35941e6d88fb","Type":"ContainerDied","Data":"4d6b9a7c2ec5581de0fc1a6e456880f2629179afd572b05e672853a7bde66f67"} Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.340168 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" event={"ID":"f9671632-9016-48e7-827c-6c440d55245e","Type":"ContainerDied","Data":"8963b0c1befd2eb91246e155130bcfa171d284efe4b06736fbe4984cd5a5ce25"} Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.340298 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8963b0c1befd2eb91246e155130bcfa171d284efe4b06736fbe4984cd5a5ce25" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.340326 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555062-g8cq6" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.343658 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kbtbl" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.343648 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kbtbl" event={"ID":"cf54bb83-9452-47d2-ba9d-dc0b0e503202","Type":"ContainerDied","Data":"328e11e0dcc4f2cfbfb1523dd0c7a5ad2105735c2ca5a178039e399f9f9859e2"} Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.344188 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="328e11e0dcc4f2cfbfb1523dd0c7a5ad2105735c2ca5a178039e399f9f9859e2" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.348567 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555056-49bj8"] Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.360659 4809 scope.go:117] "RemoveContainer" containerID="c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638" Mar 12 08:22:06 crc kubenswrapper[4809]: E0312 08:22:06.361223 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638\": container with ID starting with c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638 not found: ID does not exist" containerID="c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.361269 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638"} err="failed to get container status \"c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638\": rpc error: code = NotFound desc = could not find container \"c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638\": container with ID starting with c248c5956e33ecc134cd6c5a667469cc0ccb18db90f222daea902159ffc04638 not found: 
ID does not exist" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.425003 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-combined-ca-bundle\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.425247 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-db-sync-config-data\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.425445 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-config-data\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.425530 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlmz7\" (UniqueName: \"kubernetes.io/projected/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-kube-api-access-hlmz7\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.433196 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-combined-ca-bundle\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.436142 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-db-sync-config-data\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.436717 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-config-data\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.455927 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57dffbbf6c-jl4cx"] Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.470791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlmz7\" (UniqueName: \"kubernetes.io/projected/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-kube-api-access-hlmz7\") pod \"glance-db-sync-6pfp4\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.471320 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57dffbbf6c-jl4cx"] Mar 12 08:22:06 crc kubenswrapper[4809]: I0312 08:22:06.500529 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6pfp4" Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.126649 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549c9413-b716-47a7-975c-2b9ebf41d33a" path="/var/lib/kubelet/pods/549c9413-b716-47a7-975c-2b9ebf41d33a/volumes" Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.128613 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ec0faa-1bca-4382-807d-35941e6d88fb" path="/var/lib/kubelet/pods/85ec0faa-1bca-4382-807d-35941e6d88fb/volumes" Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.149730 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6pfp4"] Mar 12 08:22:07 crc kubenswrapper[4809]: W0312 08:22:07.158292 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdb484e1_36e8_4bfe_aeb2_72fc1c331cda.slice/crio-36ab76b7b081ed3bda886d8dd032c34847125f28d121f5169cb8922ae69e1541 WatchSource:0}: Error finding container 36ab76b7b081ed3bda886d8dd032c34847125f28d121f5169cb8922ae69e1541: Status 404 returned error can't find the container with id 36ab76b7b081ed3bda886d8dd032c34847125f28d121f5169cb8922ae69e1541 Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.364687 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6pfp4" event={"ID":"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda","Type":"ContainerStarted","Data":"36ab76b7b081ed3bda886d8dd032c34847125f28d121f5169cb8922ae69e1541"} Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.900545 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d"] Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.902144 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.920002 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d"] Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.973717 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688a9778-473c-44fa-a4ba-ac00d4e21a10-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-7hl7d\" (UID: \"688a9778-473c-44fa-a4ba-ac00d4e21a10\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:07 crc kubenswrapper[4809]: I0312 08:22:07.973792 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtwkt\" (UniqueName: \"kubernetes.io/projected/688a9778-473c-44fa-a4ba-ac00d4e21a10-kube-api-access-qtwkt\") pod \"mysqld-exporter-openstack-cell1-db-create-7hl7d\" (UID: \"688a9778-473c-44fa-a4ba-ac00d4e21a10\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.075469 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688a9778-473c-44fa-a4ba-ac00d4e21a10-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-7hl7d\" (UID: \"688a9778-473c-44fa-a4ba-ac00d4e21a10\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.075549 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtwkt\" (UniqueName: \"kubernetes.io/projected/688a9778-473c-44fa-a4ba-ac00d4e21a10-kube-api-access-qtwkt\") pod \"mysqld-exporter-openstack-cell1-db-create-7hl7d\" (UID: \"688a9778-473c-44fa-a4ba-ac00d4e21a10\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.076805 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688a9778-473c-44fa-a4ba-ac00d4e21a10-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-7hl7d\" (UID: \"688a9778-473c-44fa-a4ba-ac00d4e21a10\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.109515 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtwkt\" (UniqueName: \"kubernetes.io/projected/688a9778-473c-44fa-a4ba-ac00d4e21a10-kube-api-access-qtwkt\") pod \"mysqld-exporter-openstack-cell1-db-create-7hl7d\" (UID: \"688a9778-473c-44fa-a4ba-ac00d4e21a10\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.216571 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-cffe-account-create-update-4vbvq"] Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.218000 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.222631 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.226235 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.232790 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-cffe-account-create-update-4vbvq"] Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.284868 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-operator-scripts\") pod \"mysqld-exporter-cffe-account-create-update-4vbvq\" (UID: \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\") " pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.285001 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq58l\" (UniqueName: \"kubernetes.io/projected/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-kube-api-access-pq58l\") pod \"mysqld-exporter-cffe-account-create-update-4vbvq\" (UID: \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\") " pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.387285 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-operator-scripts\") pod \"mysqld-exporter-cffe-account-create-update-4vbvq\" (UID: \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\") " pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.387745 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq58l\" (UniqueName: \"kubernetes.io/projected/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-kube-api-access-pq58l\") pod \"mysqld-exporter-cffe-account-create-update-4vbvq\" (UID: \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\") " 
pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.388092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-operator-scripts\") pod \"mysqld-exporter-cffe-account-create-update-4vbvq\" (UID: \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\") " pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.418290 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq58l\" (UniqueName: \"kubernetes.io/projected/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-kube-api-access-pq58l\") pod \"mysqld-exporter-cffe-account-create-update-4vbvq\" (UID: \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\") " pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.548856 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.568186 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kbtbl"] Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.577450 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kbtbl"] Mar 12 08:22:08 crc kubenswrapper[4809]: I0312 08:22:08.772695 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d"] Mar 12 08:22:08 crc kubenswrapper[4809]: W0312 08:22:08.837465 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod688a9778_473c_44fa_a4ba_ac00d4e21a10.slice/crio-61b5e4dde601a373127a39f19f9ca8b29cdc2a0a61b32fe3f9edb8cdadcc76a1 WatchSource:0}: Error finding container 61b5e4dde601a373127a39f19f9ca8b29cdc2a0a61b32fe3f9edb8cdadcc76a1: Status 404 returned error can't find the container with id 61b5e4dde601a373127a39f19f9ca8b29cdc2a0a61b32fe3f9edb8cdadcc76a1 Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.139482 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf54bb83-9452-47d2-ba9d-dc0b0e503202" path="/var/lib/kubelet/pods/cf54bb83-9452-47d2-ba9d-dc0b0e503202/volumes" Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.268727 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-cffe-account-create-update-4vbvq"] Mar 12 08:22:09 crc kubenswrapper[4809]: W0312 08:22:09.278241 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b3ee226_2b1e_4d6b_b0ac_6ede45a3cfff.slice/crio-b38b8670bd176ab66766f78f6531952447b2f87854c4839aa34f5c4c089dce35 WatchSource:0}: Error finding container 
b38b8670bd176ab66766f78f6531952447b2f87854c4839aa34f5c4c089dce35: Status 404 returned error can't find the container with id b38b8670bd176ab66766f78f6531952447b2f87854c4839aa34f5c4c089dce35 Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.385868 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" event={"ID":"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff","Type":"ContainerStarted","Data":"b38b8670bd176ab66766f78f6531952447b2f87854c4839aa34f5c4c089dce35"} Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.388465 4809 generic.go:334] "Generic (PLEG): container finished" podID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerID="8fe98d57f4f19a9c27d3d704cb13af2e6683cbc0c4edb4b0457e6d79607de148" exitCode=0 Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.388539 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerDied","Data":"8fe98d57f4f19a9c27d3d704cb13af2e6683cbc0c4edb4b0457e6d79607de148"} Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.391630 4809 generic.go:334] "Generic (PLEG): container finished" podID="688a9778-473c-44fa-a4ba-ac00d4e21a10" containerID="21b811943b08988a3596b1ba906a57bf8740487822c38eae8cd2592fc3f1d129" exitCode=0 Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.391661 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" event={"ID":"688a9778-473c-44fa-a4ba-ac00d4e21a10","Type":"ContainerDied","Data":"21b811943b08988a3596b1ba906a57bf8740487822c38eae8cd2592fc3f1d129"} Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.391680 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" 
event={"ID":"688a9778-473c-44fa-a4ba-ac00d4e21a10","Type":"ContainerStarted","Data":"61b5e4dde601a373127a39f19f9ca8b29cdc2a0a61b32fe3f9edb8cdadcc76a1"} Mar 12 08:22:09 crc kubenswrapper[4809]: I0312 08:22:09.614984 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.404477 4809 generic.go:334] "Generic (PLEG): container finished" podID="7c349c64-fcf0-48cf-91c5-fac0131bacc6" containerID="02875af7f91784040d3dff2f284134bf4beb1b7dd95bd2602334fd2622d0b2ba" exitCode=0 Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.404555 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w9c45" event={"ID":"7c349c64-fcf0-48cf-91c5-fac0131bacc6","Type":"ContainerDied","Data":"02875af7f91784040d3dff2f284134bf4beb1b7dd95bd2602334fd2622d0b2ba"} Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.407296 4809 generic.go:334] "Generic (PLEG): container finished" podID="3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff" containerID="0e0693879750442393e55afb37489772c8505c81eef03f3e4b5f44b06f753c89" exitCode=0 Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.407495 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" event={"ID":"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff","Type":"ContainerDied","Data":"0e0693879750442393e55afb37489772c8505c81eef03f3e4b5f44b06f753c89"} Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.721887 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-x5bc6" podUID="e8f77780-9a39-4298-8bfe-76a54e1e41d9" containerName="ovn-controller" probeResult="failure" output=< Mar 12 08:22:10 crc kubenswrapper[4809]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 08:22:10 crc kubenswrapper[4809]: > Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.743457 4809 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.758812 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pt4wh" Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.860234 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.949250 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688a9778-473c-44fa-a4ba-ac00d4e21a10-operator-scripts\") pod \"688a9778-473c-44fa-a4ba-ac00d4e21a10\" (UID: \"688a9778-473c-44fa-a4ba-ac00d4e21a10\") " Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.949338 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtwkt\" (UniqueName: \"kubernetes.io/projected/688a9778-473c-44fa-a4ba-ac00d4e21a10-kube-api-access-qtwkt\") pod \"688a9778-473c-44fa-a4ba-ac00d4e21a10\" (UID: \"688a9778-473c-44fa-a4ba-ac00d4e21a10\") " Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.949771 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688a9778-473c-44fa-a4ba-ac00d4e21a10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "688a9778-473c-44fa-a4ba-ac00d4e21a10" (UID: "688a9778-473c-44fa-a4ba-ac00d4e21a10"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.950956 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688a9778-473c-44fa-a4ba-ac00d4e21a10-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:10 crc kubenswrapper[4809]: I0312 08:22:10.962066 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688a9778-473c-44fa-a4ba-ac00d4e21a10-kube-api-access-qtwkt" (OuterVolumeSpecName: "kube-api-access-qtwkt") pod "688a9778-473c-44fa-a4ba-ac00d4e21a10" (UID: "688a9778-473c-44fa-a4ba-ac00d4e21a10"). InnerVolumeSpecName "kube-api-access-qtwkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.053594 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtwkt\" (UniqueName: \"kubernetes.io/projected/688a9778-473c-44fa-a4ba-ac00d4e21a10-kube-api-access-qtwkt\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.070350 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x5bc6-config-6mqmm"] Mar 12 08:22:11 crc kubenswrapper[4809]: E0312 08:22:11.070801 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688a9778-473c-44fa-a4ba-ac00d4e21a10" containerName="mariadb-database-create" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.070818 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="688a9778-473c-44fa-a4ba-ac00d4e21a10" containerName="mariadb-database-create" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.071035 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="688a9778-473c-44fa-a4ba-ac00d4e21a10" containerName="mariadb-database-create" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.071796 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.078083 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.098078 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x5bc6-config-6mqmm"] Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.159143 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-additional-scripts\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.159229 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.159299 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m7fv\" (UniqueName: \"kubernetes.io/projected/9facde94-80d9-47d6-8eec-f05289b3f4e0-kube-api-access-6m7fv\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.159343 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-scripts\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: 
\"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.159367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-log-ovn\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.159393 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run-ovn\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.261896 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-additional-scripts\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.262040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.262140 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m7fv\" (UniqueName: \"kubernetes.io/projected/9facde94-80d9-47d6-8eec-f05289b3f4e0-kube-api-access-6m7fv\") pod 
\"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.262224 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-scripts\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.262252 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-log-ovn\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.262283 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run-ovn\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.262768 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run-ovn\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.264411 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: 
\"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.264737 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-log-ovn\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.265515 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-scripts\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.266284 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-additional-scripts\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.289071 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m7fv\" (UniqueName: \"kubernetes.io/projected/9facde94-80d9-47d6-8eec-f05289b3f4e0-kube-api-access-6m7fv\") pod \"ovn-controller-x5bc6-config-6mqmm\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.390652 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.420541 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" event={"ID":"688a9778-473c-44fa-a4ba-ac00d4e21a10","Type":"ContainerDied","Data":"61b5e4dde601a373127a39f19f9ca8b29cdc2a0a61b32fe3f9edb8cdadcc76a1"} Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.420609 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b5e4dde601a373127a39f19f9ca8b29cdc2a0a61b32fe3f9edb8cdadcc76a1" Mar 12 08:22:11 crc kubenswrapper[4809]: I0312 08:22:11.420635 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.026467 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.031940 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.085835 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-ring-data-devices\") pod \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.085948 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-combined-ca-bundle\") pod \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.085979 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-operator-scripts\") pod \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\" (UID: \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.086009 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c349c64-fcf0-48cf-91c5-fac0131bacc6-etc-swift\") pod \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.086146 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-scripts\") pod \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.086186 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-swiftconf\") pod \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.086221 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq58l\" (UniqueName: \"kubernetes.io/projected/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-kube-api-access-pq58l\") pod \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\" (UID: \"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.086252 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-dispersionconf\") pod \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.086274 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crw7n\" (UniqueName: \"kubernetes.io/projected/7c349c64-fcf0-48cf-91c5-fac0131bacc6-kube-api-access-crw7n\") pod \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\" (UID: \"7c349c64-fcf0-48cf-91c5-fac0131bacc6\") " Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.089070 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c349c64-fcf0-48cf-91c5-fac0131bacc6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7c349c64-fcf0-48cf-91c5-fac0131bacc6" (UID: "7c349c64-fcf0-48cf-91c5-fac0131bacc6"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.090225 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff" (UID: "3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.090228 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7c349c64-fcf0-48cf-91c5-fac0131bacc6" (UID: "7c349c64-fcf0-48cf-91c5-fac0131bacc6"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.095092 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-kube-api-access-pq58l" (OuterVolumeSpecName: "kube-api-access-pq58l") pod "3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff" (UID: "3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff"). InnerVolumeSpecName "kube-api-access-pq58l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.095535 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c349c64-fcf0-48cf-91c5-fac0131bacc6-kube-api-access-crw7n" (OuterVolumeSpecName: "kube-api-access-crw7n") pod "7c349c64-fcf0-48cf-91c5-fac0131bacc6" (UID: "7c349c64-fcf0-48cf-91c5-fac0131bacc6"). InnerVolumeSpecName "kube-api-access-crw7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.115757 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7c349c64-fcf0-48cf-91c5-fac0131bacc6" (UID: "7c349c64-fcf0-48cf-91c5-fac0131bacc6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.120649 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x5bc6-config-6mqmm"] Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.155912 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7c349c64-fcf0-48cf-91c5-fac0131bacc6" (UID: "7c349c64-fcf0-48cf-91c5-fac0131bacc6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.161625 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c349c64-fcf0-48cf-91c5-fac0131bacc6" (UID: "7c349c64-fcf0-48cf-91c5-fac0131bacc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.179953 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-scripts" (OuterVolumeSpecName: "scripts") pod "7c349c64-fcf0-48cf-91c5-fac0131bacc6" (UID: "7c349c64-fcf0-48cf-91c5-fac0131bacc6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.189016 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.189060 4809 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.189071 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq58l\" (UniqueName: \"kubernetes.io/projected/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-kube-api-access-pq58l\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.189176 4809 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.189187 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crw7n\" (UniqueName: \"kubernetes.io/projected/7c349c64-fcf0-48cf-91c5-fac0131bacc6-kube-api-access-crw7n\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.189198 4809 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7c349c64-fcf0-48cf-91c5-fac0131bacc6-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.189207 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c349c64-fcf0-48cf-91c5-fac0131bacc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 
08:22:12.189217 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.189226 4809 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7c349c64-fcf0-48cf-91c5-fac0131bacc6-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.443052 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6-config-6mqmm" event={"ID":"9facde94-80d9-47d6-8eec-f05289b3f4e0","Type":"ContainerStarted","Data":"56809feba605d45ba3b81797ece110e11ab8eadb598060d15bf07f5a42d4543b"} Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.448028 4809 generic.go:334] "Generic (PLEG): container finished" podID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerID="22b536a9bcc350066bd43312ac398e7aea4d0d0be2649687f90ebdc261a880ce" exitCode=0 Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.448180 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0","Type":"ContainerDied","Data":"22b536a9bcc350066bd43312ac398e7aea4d0d0be2649687f90ebdc261a880ce"} Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.454481 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-w9c45" event={"ID":"7c349c64-fcf0-48cf-91c5-fac0131bacc6","Type":"ContainerDied","Data":"a8ca8d2e85e526ced50a464b68bea5f4b0a3febc7f74127a57b6800f41da183a"} Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.454533 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ca8d2e85e526ced50a464b68bea5f4b0a3febc7f74127a57b6800f41da183a" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.454617 4809 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-w9c45" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.468271 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" event={"ID":"3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff","Type":"ContainerDied","Data":"b38b8670bd176ab66766f78f6531952447b2f87854c4839aa34f5c4c089dce35"} Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.468312 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38b8670bd176ab66766f78f6531952447b2f87854c4839aa34f5c4c089dce35" Mar 12 08:22:12 crc kubenswrapper[4809]: I0312 08:22:12.468316 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-cffe-account-create-update-4vbvq" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.318927 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 08:22:13 crc kubenswrapper[4809]: E0312 08:22:13.319697 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff" containerName="mariadb-account-create-update" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.319723 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff" containerName="mariadb-account-create-update" Mar 12 08:22:13 crc kubenswrapper[4809]: E0312 08:22:13.319739 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c349c64-fcf0-48cf-91c5-fac0131bacc6" containerName="swift-ring-rebalance" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.319747 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c349c64-fcf0-48cf-91c5-fac0131bacc6" containerName="swift-ring-rebalance" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.319920 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c349c64-fcf0-48cf-91c5-fac0131bacc6" 
containerName="swift-ring-rebalance" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.319937 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff" containerName="mariadb-account-create-update" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.320663 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.326831 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.347029 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.414985 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.415071 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-config-data\") pod \"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.415095 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxwt\" (UniqueName: \"kubernetes.io/projected/05554710-f410-4b78-9fb2-22fc55aeea98-kube-api-access-ttxwt\") pod \"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.492709 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0","Type":"ContainerStarted","Data":"b965992fcf175c016f5c92c5db6095b16f8850df2597d532cfce28009b0f6aea"} Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.493378 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.504794 4809 generic.go:334] "Generic (PLEG): container finished" podID="9facde94-80d9-47d6-8eec-f05289b3f4e0" containerID="e627c1b0d7956435ef06290158275a1f130d6733b653de2d5bda80602359cc3d" exitCode=0 Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.504860 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6-config-6mqmm" event={"ID":"9facde94-80d9-47d6-8eec-f05289b3f4e0","Type":"ContainerDied","Data":"e627c1b0d7956435ef06290158275a1f130d6733b653de2d5bda80602359cc3d"} Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.516683 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.516810 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-config-data\") pod \"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.516854 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxwt\" (UniqueName: \"kubernetes.io/projected/05554710-f410-4b78-9fb2-22fc55aeea98-kube-api-access-ttxwt\") pod 
\"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.534622 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-config-data\") pod \"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.534758 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.542008 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttxwt\" (UniqueName: \"kubernetes.io/projected/05554710-f410-4b78-9fb2-22fc55aeea98-kube-api-access-ttxwt\") pod \"mysqld-exporter-0\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.560524 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371963.29427 podStartE2EDuration="1m13.560507112s" podCreationTimestamp="2026-03-12 08:21:00 +0000 UTC" firstStartedPulling="2026-03-12 08:21:03.171104278 +0000 UTC m=+1336.753140001" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:13.55897122 +0000 UTC m=+1407.141006953" watchObservedRunningTime="2026-03-12 08:22:13.560507112 +0000 UTC m=+1407.142542845" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.648074 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.715648 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pnpgj"] Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.718002 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.730716 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.757688 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pnpgj"] Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.843441 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22nq\" (UniqueName: \"kubernetes.io/projected/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-kube-api-access-r22nq\") pod \"root-account-create-update-pnpgj\" (UID: \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\") " pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.843530 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-operator-scripts\") pod \"root-account-create-update-pnpgj\" (UID: \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\") " pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.945907 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-operator-scripts\") pod \"root-account-create-update-pnpgj\" (UID: \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\") " 
pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.946232 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r22nq\" (UniqueName: \"kubernetes.io/projected/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-kube-api-access-r22nq\") pod \"root-account-create-update-pnpgj\" (UID: \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\") " pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.947171 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-operator-scripts\") pod \"root-account-create-update-pnpgj\" (UID: \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\") " pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:13 crc kubenswrapper[4809]: I0312 08:22:13.969021 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r22nq\" (UniqueName: \"kubernetes.io/projected/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-kube-api-access-r22nq\") pod \"root-account-create-update-pnpgj\" (UID: \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\") " pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:14 crc kubenswrapper[4809]: I0312 08:22:14.102611 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:14 crc kubenswrapper[4809]: I0312 08:22:14.292222 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 08:22:14 crc kubenswrapper[4809]: I0312 08:22:14.550723 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"05554710-f410-4b78-9fb2-22fc55aeea98","Type":"ContainerStarted","Data":"8eb5d80b9d54c4678d3288ac32a72b38e9807d06a729e6ad2ea4ba962034c235"} Mar 12 08:22:14 crc kubenswrapper[4809]: I0312 08:22:14.936523 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pnpgj"] Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.288346 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.395567 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run\") pod \"9facde94-80d9-47d6-8eec-f05289b3f4e0\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.395671 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run" (OuterVolumeSpecName: "var-run") pod "9facde94-80d9-47d6-8eec-f05289b3f4e0" (UID: "9facde94-80d9-47d6-8eec-f05289b3f4e0"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.395690 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run-ovn\") pod \"9facde94-80d9-47d6-8eec-f05289b3f4e0\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.395709 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9facde94-80d9-47d6-8eec-f05289b3f4e0" (UID: "9facde94-80d9-47d6-8eec-f05289b3f4e0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.395811 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m7fv\" (UniqueName: \"kubernetes.io/projected/9facde94-80d9-47d6-8eec-f05289b3f4e0-kube-api-access-6m7fv\") pod \"9facde94-80d9-47d6-8eec-f05289b3f4e0\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.395852 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-log-ovn\") pod \"9facde94-80d9-47d6-8eec-f05289b3f4e0\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.395942 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-additional-scripts\") pod \"9facde94-80d9-47d6-8eec-f05289b3f4e0\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.395977 4809 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9facde94-80d9-47d6-8eec-f05289b3f4e0" (UID: "9facde94-80d9-47d6-8eec-f05289b3f4e0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.396037 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-scripts\") pod \"9facde94-80d9-47d6-8eec-f05289b3f4e0\" (UID: \"9facde94-80d9-47d6-8eec-f05289b3f4e0\") " Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.396482 4809 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.396494 4809 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.396502 4809 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9facde94-80d9-47d6-8eec-f05289b3f4e0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.396659 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9facde94-80d9-47d6-8eec-f05289b3f4e0" (UID: "9facde94-80d9-47d6-8eec-f05289b3f4e0"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.397551 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-scripts" (OuterVolumeSpecName: "scripts") pod "9facde94-80d9-47d6-8eec-f05289b3f4e0" (UID: "9facde94-80d9-47d6-8eec-f05289b3f4e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.417666 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9facde94-80d9-47d6-8eec-f05289b3f4e0-kube-api-access-6m7fv" (OuterVolumeSpecName: "kube-api-access-6m7fv") pod "9facde94-80d9-47d6-8eec-f05289b3f4e0" (UID: "9facde94-80d9-47d6-8eec-f05289b3f4e0"). InnerVolumeSpecName "kube-api-access-6m7fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.499173 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m7fv\" (UniqueName: \"kubernetes.io/projected/9facde94-80d9-47d6-8eec-f05289b3f4e0-kube-api-access-6m7fv\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.499218 4809 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.499232 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9facde94-80d9-47d6-8eec-f05289b3f4e0-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.563585 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6-config-6mqmm" 
event={"ID":"9facde94-80d9-47d6-8eec-f05289b3f4e0","Type":"ContainerDied","Data":"56809feba605d45ba3b81797ece110e11ab8eadb598060d15bf07f5a42d4543b"} Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.563632 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56809feba605d45ba3b81797ece110e11ab8eadb598060d15bf07f5a42d4543b" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.563649 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x5bc6-config-6mqmm" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.573373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnpgj" event={"ID":"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf","Type":"ContainerStarted","Data":"380e4e9ac0328706b84c3eecd96967f7de7cf093daaeff21af047bcd4164aab5"} Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.573420 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnpgj" event={"ID":"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf","Type":"ContainerStarted","Data":"952fca08c93ff13424b3ab30ab96798b1e35694580ff81a85617465ab9e7c680"} Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.591279 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-pnpgj" podStartSLOduration=2.591261342 podStartE2EDuration="2.591261342s" podCreationTimestamp="2026-03-12 08:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:15.591253712 +0000 UTC m=+1409.173289445" watchObservedRunningTime="2026-03-12 08:22:15.591261342 +0000 UTC m=+1409.173297075" Mar 12 08:22:15 crc kubenswrapper[4809]: I0312 08:22:15.725872 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-x5bc6" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 
08:22:16.401410 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x5bc6-config-6mqmm"] Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.410165 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x5bc6-config-6mqmm"] Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.607076 4809 generic.go:334] "Generic (PLEG): container finished" podID="0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf" containerID="380e4e9ac0328706b84c3eecd96967f7de7cf093daaeff21af047bcd4164aab5" exitCode=0 Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.607225 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x5bc6-config-ll2pb"] Mar 12 08:22:16 crc kubenswrapper[4809]: E0312 08:22:16.608768 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9facde94-80d9-47d6-8eec-f05289b3f4e0" containerName="ovn-config" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.608785 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9facde94-80d9-47d6-8eec-f05289b3f4e0" containerName="ovn-config" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.609008 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9facde94-80d9-47d6-8eec-f05289b3f4e0" containerName="ovn-config" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.609797 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnpgj" event={"ID":"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf","Type":"ContainerDied","Data":"380e4e9ac0328706b84c3eecd96967f7de7cf093daaeff21af047bcd4164aab5"} Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.609899 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.613695 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.619041 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x5bc6-config-ll2pb"] Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.623936 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"05554710-f410-4b78-9fb2-22fc55aeea98","Type":"ContainerStarted","Data":"1e64bfd09d8d77d66d9596ed68feb37d5b5cebf7e3f4d7a789cd88e5c9770d10"} Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.727678 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.930342977 podStartE2EDuration="3.727661057s" podCreationTimestamp="2026-03-12 08:22:13 +0000 UTC" firstStartedPulling="2026-03-12 08:22:14.35850961 +0000 UTC m=+1407.940545343" lastFinishedPulling="2026-03-12 08:22:16.15582769 +0000 UTC m=+1409.737863423" observedRunningTime="2026-03-12 08:22:16.725474857 +0000 UTC m=+1410.307510590" watchObservedRunningTime="2026-03-12 08:22:16.727661057 +0000 UTC m=+1410.309696790" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.737851 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run-ovn\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.737927 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-additional-scripts\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.737947 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.737976 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-log-ovn\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.738169 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2srk\" (UniqueName: \"kubernetes.io/projected/ece91fae-9388-4231-a308-28d9ee06524b-kube-api-access-n2srk\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.738428 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-scripts\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.842523 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run-ovn\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.842634 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-additional-scripts\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.842676 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.842718 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-log-ovn\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.842812 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2srk\" (UniqueName: \"kubernetes.io/projected/ece91fae-9388-4231-a308-28d9ee06524b-kube-api-access-n2srk\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.842902 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-scripts\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.842954 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run-ovn\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.843282 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-log-ovn\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.843424 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.844269 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-additional-scripts\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.848019 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-scripts\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.875634 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2srk\" (UniqueName: \"kubernetes.io/projected/ece91fae-9388-4231-a308-28d9ee06524b-kube-api-access-n2srk\") pod \"ovn-controller-x5bc6-config-ll2pb\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:16 crc kubenswrapper[4809]: I0312 08:22:16.994570 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:17 crc kubenswrapper[4809]: I0312 08:22:17.129935 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9facde94-80d9-47d6-8eec-f05289b3f4e0" path="/var/lib/kubelet/pods/9facde94-80d9-47d6-8eec-f05289b3f4e0/volumes" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.012593 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.155525 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-operator-scripts\") pod \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\" (UID: \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\") " Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.156309 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r22nq\" (UniqueName: \"kubernetes.io/projected/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-kube-api-access-r22nq\") pod \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\" (UID: \"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf\") " Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.156604 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf" (UID: "0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.156717 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.157018 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.165749 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-kube-api-access-r22nq" (OuterVolumeSpecName: "kube-api-access-r22nq") pod "0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf" (UID: "0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf"). InnerVolumeSpecName "kube-api-access-r22nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.166963 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b325264-3ac9-446e-b820-c40d942263e6-etc-swift\") pod \"swift-storage-0\" (UID: \"7b325264-3ac9-446e-b820-c40d942263e6\") " pod="openstack/swift-storage-0" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.215913 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.260983 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r22nq\" (UniqueName: \"kubernetes.io/projected/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf-kube-api-access-r22nq\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.694297 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pnpgj" event={"ID":"0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf","Type":"ContainerDied","Data":"952fca08c93ff13424b3ab30ab96798b1e35694580ff81a85617465ab9e7c680"} Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.694360 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="952fca08c93ff13424b3ab30ab96798b1e35694580ff81a85617465ab9e7c680" Mar 12 08:22:21 crc kubenswrapper[4809]: I0312 08:22:21.694396 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pnpgj" Mar 12 08:22:22 crc kubenswrapper[4809]: I0312 08:22:22.188499 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:22:22 crc kubenswrapper[4809]: I0312 08:22:22.676517 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 08:22:22 crc kubenswrapper[4809]: I0312 08:22:22.698473 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 12 08:22:22 crc kubenswrapper[4809]: I0312 08:22:22.735375 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.663100 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-drcgq"] Mar 12 08:22:24 crc kubenswrapper[4809]: E0312 08:22:24.664182 4809 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf" containerName="mariadb-account-create-update" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.664198 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf" containerName="mariadb-account-create-update" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.664501 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf" containerName="mariadb-account-create-update" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.665598 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-drcgq" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.682682 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-drcgq"] Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.754144 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwj2h\" (UniqueName: \"kubernetes.io/projected/d11d110e-e009-481e-a5f6-1a380f66764c-kube-api-access-mwj2h\") pod \"heat-db-create-drcgq\" (UID: \"d11d110e-e009-481e-a5f6-1a380f66764c\") " pod="openstack/heat-db-create-drcgq" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.754198 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11d110e-e009-481e-a5f6-1a380f66764c-operator-scripts\") pod \"heat-db-create-drcgq\" (UID: \"d11d110e-e009-481e-a5f6-1a380f66764c\") " pod="openstack/heat-db-create-drcgq" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.772923 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3d72-account-create-update-vntwh"] Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.774245 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.777822 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.793163 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3d72-account-create-update-vntwh"] Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.856557 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwmxc\" (UniqueName: \"kubernetes.io/projected/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-kube-api-access-xwmxc\") pod \"cinder-3d72-account-create-update-vntwh\" (UID: \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\") " pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.856884 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-operator-scripts\") pod \"cinder-3d72-account-create-update-vntwh\" (UID: \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\") " pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.856998 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwj2h\" (UniqueName: \"kubernetes.io/projected/d11d110e-e009-481e-a5f6-1a380f66764c-kube-api-access-mwj2h\") pod \"heat-db-create-drcgq\" (UID: \"d11d110e-e009-481e-a5f6-1a380f66764c\") " pod="openstack/heat-db-create-drcgq" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.857126 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11d110e-e009-481e-a5f6-1a380f66764c-operator-scripts\") pod \"heat-db-create-drcgq\" (UID: 
\"d11d110e-e009-481e-a5f6-1a380f66764c\") " pod="openstack/heat-db-create-drcgq" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.858005 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11d110e-e009-481e-a5f6-1a380f66764c-operator-scripts\") pod \"heat-db-create-drcgq\" (UID: \"d11d110e-e009-481e-a5f6-1a380f66764c\") " pod="openstack/heat-db-create-drcgq" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.879007 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qdxsg"] Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.880833 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qdxsg" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.902045 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwj2h\" (UniqueName: \"kubernetes.io/projected/d11d110e-e009-481e-a5f6-1a380f66764c-kube-api-access-mwj2h\") pod \"heat-db-create-drcgq\" (UID: \"d11d110e-e009-481e-a5f6-1a380f66764c\") " pod="openstack/heat-db-create-drcgq" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.926353 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qdxsg"] Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.958949 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ececaf-3560-4284-9d82-ac39de15bf88-operator-scripts\") pod \"cinder-db-create-qdxsg\" (UID: \"a5ececaf-3560-4284-9d82-ac39de15bf88\") " pod="openstack/cinder-db-create-qdxsg" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.959358 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2cr9\" (UniqueName: 
\"kubernetes.io/projected/a5ececaf-3560-4284-9d82-ac39de15bf88-kube-api-access-h2cr9\") pod \"cinder-db-create-qdxsg\" (UID: \"a5ececaf-3560-4284-9d82-ac39de15bf88\") " pod="openstack/cinder-db-create-qdxsg" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.959663 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwmxc\" (UniqueName: \"kubernetes.io/projected/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-kube-api-access-xwmxc\") pod \"cinder-3d72-account-create-update-vntwh\" (UID: \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\") " pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.959765 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-operator-scripts\") pod \"cinder-3d72-account-create-update-vntwh\" (UID: \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\") " pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.961395 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-operator-scripts\") pod \"cinder-3d72-account-create-update-vntwh\" (UID: \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\") " pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.962671 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-756c-account-create-update-vv58x"] Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.964571 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.966820 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 12 08:22:24 crc kubenswrapper[4809]: I0312 08:22:24.989781 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwmxc\" (UniqueName: \"kubernetes.io/projected/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-kube-api-access-xwmxc\") pod \"cinder-3d72-account-create-update-vntwh\" (UID: \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\") " pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.000269 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-756c-account-create-update-vv58x"] Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.037778 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-drcgq" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.064363 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-operator-scripts\") pod \"heat-756c-account-create-update-vv58x\" (UID: \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\") " pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.064480 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ececaf-3560-4284-9d82-ac39de15bf88-operator-scripts\") pod \"cinder-db-create-qdxsg\" (UID: \"a5ececaf-3560-4284-9d82-ac39de15bf88\") " pod="openstack/cinder-db-create-qdxsg" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.064514 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2cr9\" (UniqueName: 
\"kubernetes.io/projected/a5ececaf-3560-4284-9d82-ac39de15bf88-kube-api-access-h2cr9\") pod \"cinder-db-create-qdxsg\" (UID: \"a5ececaf-3560-4284-9d82-ac39de15bf88\") " pod="openstack/cinder-db-create-qdxsg" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.064545 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzb9\" (UniqueName: \"kubernetes.io/projected/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-kube-api-access-mjzb9\") pod \"heat-756c-account-create-update-vv58x\" (UID: \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\") " pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.069289 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ececaf-3560-4284-9d82-ac39de15bf88-operator-scripts\") pod \"cinder-db-create-qdxsg\" (UID: \"a5ececaf-3560-4284-9d82-ac39de15bf88\") " pod="openstack/cinder-db-create-qdxsg" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.085910 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2cr9\" (UniqueName: \"kubernetes.io/projected/a5ececaf-3560-4284-9d82-ac39de15bf88-kube-api-access-h2cr9\") pod \"cinder-db-create-qdxsg\" (UID: \"a5ececaf-3560-4284-9d82-ac39de15bf88\") " pod="openstack/cinder-db-create-qdxsg" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.097629 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.166590 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-operator-scripts\") pod \"heat-756c-account-create-update-vv58x\" (UID: \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\") " pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.166734 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjzb9\" (UniqueName: \"kubernetes.io/projected/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-kube-api-access-mjzb9\") pod \"heat-756c-account-create-update-vv58x\" (UID: \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\") " pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.167588 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-operator-scripts\") pod \"heat-756c-account-create-update-vv58x\" (UID: \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\") " pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.181997 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8rbct"] Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.184057 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8rbct" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.194666 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjzb9\" (UniqueName: \"kubernetes.io/projected/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-kube-api-access-mjzb9\") pod \"heat-756c-account-create-update-vv58x\" (UID: \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\") " pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.205823 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8rbct"] Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.259148 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qdxsg" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.269034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/193b129f-b891-4890-88b0-bfcc2799127b-operator-scripts\") pod \"barbican-db-create-8rbct\" (UID: \"193b129f-b891-4890-88b0-bfcc2799127b\") " pod="openstack/barbican-db-create-8rbct" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.269264 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnt56\" (UniqueName: \"kubernetes.io/projected/193b129f-b891-4890-88b0-bfcc2799127b-kube-api-access-tnt56\") pod \"barbican-db-create-8rbct\" (UID: \"193b129f-b891-4890-88b0-bfcc2799127b\") " pod="openstack/barbican-db-create-8rbct" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.287648 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-85jsk"] Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.287666 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.289623 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-85jsk" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.292355 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gndsl" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.292390 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.293103 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.293304 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.300179 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-85jsk"] Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.371719 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/193b129f-b891-4890-88b0-bfcc2799127b-operator-scripts\") pod \"barbican-db-create-8rbct\" (UID: \"193b129f-b891-4890-88b0-bfcc2799127b\") " pod="openstack/barbican-db-create-8rbct" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.371825 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b65ht\" (UniqueName: \"kubernetes.io/projected/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-kube-api-access-b65ht\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk" Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.371871 4809 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-tnt56\" (UniqueName: \"kubernetes.io/projected/193b129f-b891-4890-88b0-bfcc2799127b-kube-api-access-tnt56\") pod \"barbican-db-create-8rbct\" (UID: \"193b129f-b891-4890-88b0-bfcc2799127b\") " pod="openstack/barbican-db-create-8rbct"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.371893 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-config-data\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.371953 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-combined-ca-bundle\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.372509 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/193b129f-b891-4890-88b0-bfcc2799127b-operator-scripts\") pod \"barbican-db-create-8rbct\" (UID: \"193b129f-b891-4890-88b0-bfcc2799127b\") " pod="openstack/barbican-db-create-8rbct"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.389125 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vfrtk"]
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.391522 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vfrtk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.402672 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vfrtk"]
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.407061 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnt56\" (UniqueName: \"kubernetes.io/projected/193b129f-b891-4890-88b0-bfcc2799127b-kube-api-access-tnt56\") pod \"barbican-db-create-8rbct\" (UID: \"193b129f-b891-4890-88b0-bfcc2799127b\") " pod="openstack/barbican-db-create-8rbct"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.436345 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e5ed-account-create-update-tgts4"]
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.437925 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e5ed-account-create-update-tgts4"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.451883 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.455237 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e5ed-account-create-update-tgts4"]
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.475786 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7td\" (UniqueName: \"kubernetes.io/projected/d834299e-a8cc-4c17-9b41-8e00d9fa2929-kube-api-access-cf7td\") pod \"barbican-e5ed-account-create-update-tgts4\" (UID: \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\") " pod="openstack/barbican-e5ed-account-create-update-tgts4"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.476296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b65ht\" (UniqueName: \"kubernetes.io/projected/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-kube-api-access-b65ht\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.476434 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-kube-api-access-fm5km\") pod \"neutron-db-create-vfrtk\" (UID: \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\") " pod="openstack/neutron-db-create-vfrtk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.476528 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-config-data\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.476650 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-operator-scripts\") pod \"neutron-db-create-vfrtk\" (UID: \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\") " pod="openstack/neutron-db-create-vfrtk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.476749 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-combined-ca-bundle\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.476863 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d834299e-a8cc-4c17-9b41-8e00d9fa2929-operator-scripts\") pod \"barbican-e5ed-account-create-update-tgts4\" (UID: \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\") " pod="openstack/barbican-e5ed-account-create-update-tgts4"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.482544 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-combined-ca-bundle\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.484442 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-config-data\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.512710 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b65ht\" (UniqueName: \"kubernetes.io/projected/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-kube-api-access-b65ht\") pod \"keystone-db-sync-85jsk\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.526009 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bfd7-account-create-update-c8q5g"]
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.527551 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfd7-account-create-update-c8q5g"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.531211 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.539357 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfd7-account-create-update-c8q5g"]
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.549953 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8rbct"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.591909 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgks9\" (UniqueName: \"kubernetes.io/projected/31d780d8-832e-43d9-81f7-8047de4d9076-kube-api-access-fgks9\") pod \"neutron-bfd7-account-create-update-c8q5g\" (UID: \"31d780d8-832e-43d9-81f7-8047de4d9076\") " pod="openstack/neutron-bfd7-account-create-update-c8q5g"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.592014 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-kube-api-access-fm5km\") pod \"neutron-db-create-vfrtk\" (UID: \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\") " pod="openstack/neutron-db-create-vfrtk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.592125 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-operator-scripts\") pod \"neutron-db-create-vfrtk\" (UID: \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\") " pod="openstack/neutron-db-create-vfrtk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.592276 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d834299e-a8cc-4c17-9b41-8e00d9fa2929-operator-scripts\") pod \"barbican-e5ed-account-create-update-tgts4\" (UID: \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\") " pod="openstack/barbican-e5ed-account-create-update-tgts4"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.592336 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d780d8-832e-43d9-81f7-8047de4d9076-operator-scripts\") pod \"neutron-bfd7-account-create-update-c8q5g\" (UID: \"31d780d8-832e-43d9-81f7-8047de4d9076\") " pod="openstack/neutron-bfd7-account-create-update-c8q5g"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.592440 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf7td\" (UniqueName: \"kubernetes.io/projected/d834299e-a8cc-4c17-9b41-8e00d9fa2929-kube-api-access-cf7td\") pod \"barbican-e5ed-account-create-update-tgts4\" (UID: \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\") " pod="openstack/barbican-e5ed-account-create-update-tgts4"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.594352 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-operator-scripts\") pod \"neutron-db-create-vfrtk\" (UID: \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\") " pod="openstack/neutron-db-create-vfrtk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.595222 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d834299e-a8cc-4c17-9b41-8e00d9fa2929-operator-scripts\") pod \"barbican-e5ed-account-create-update-tgts4\" (UID: \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\") " pod="openstack/barbican-e5ed-account-create-update-tgts4"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.604907 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-85jsk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.610047 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-kube-api-access-fm5km\") pod \"neutron-db-create-vfrtk\" (UID: \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\") " pod="openstack/neutron-db-create-vfrtk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.617512 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf7td\" (UniqueName: \"kubernetes.io/projected/d834299e-a8cc-4c17-9b41-8e00d9fa2929-kube-api-access-cf7td\") pod \"barbican-e5ed-account-create-update-tgts4\" (UID: \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\") " pod="openstack/barbican-e5ed-account-create-update-tgts4"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.694786 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgks9\" (UniqueName: \"kubernetes.io/projected/31d780d8-832e-43d9-81f7-8047de4d9076-kube-api-access-fgks9\") pod \"neutron-bfd7-account-create-update-c8q5g\" (UID: \"31d780d8-832e-43d9-81f7-8047de4d9076\") " pod="openstack/neutron-bfd7-account-create-update-c8q5g"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.694883 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d780d8-832e-43d9-81f7-8047de4d9076-operator-scripts\") pod \"neutron-bfd7-account-create-update-c8q5g\" (UID: \"31d780d8-832e-43d9-81f7-8047de4d9076\") " pod="openstack/neutron-bfd7-account-create-update-c8q5g"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.695748 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d780d8-832e-43d9-81f7-8047de4d9076-operator-scripts\") pod \"neutron-bfd7-account-create-update-c8q5g\" (UID: \"31d780d8-832e-43d9-81f7-8047de4d9076\") " pod="openstack/neutron-bfd7-account-create-update-c8q5g"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.720890 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgks9\" (UniqueName: \"kubernetes.io/projected/31d780d8-832e-43d9-81f7-8047de4d9076-kube-api-access-fgks9\") pod \"neutron-bfd7-account-create-update-c8q5g\" (UID: \"31d780d8-832e-43d9-81f7-8047de4d9076\") " pod="openstack/neutron-bfd7-account-create-update-c8q5g"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.759876 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vfrtk"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.793392 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e5ed-account-create-update-tgts4"
Mar 12 08:22:25 crc kubenswrapper[4809]: I0312 08:22:25.849343 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfd7-account-create-update-c8q5g"
Mar 12 08:22:30 crc kubenswrapper[4809]: E0312 08:22:30.769796 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Mar 12 08:22:30 crc kubenswrapper[4809]: E0312 08:22:30.770866 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlmz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-6pfp4_openstack(cdb484e1-36e8-4bfe-aeb2-72fc1c331cda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 12 08:22:30 crc kubenswrapper[4809]: E0312 08:22:30.772052 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-6pfp4" podUID="cdb484e1-36e8-4bfe-aeb2-72fc1c331cda"
Mar 12 08:22:30 crc kubenswrapper[4809]: E0312 08:22:30.884851 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-6pfp4" podUID="cdb484e1-36e8-4bfe-aeb2-72fc1c331cda"
Mar 12 08:22:31 crc kubenswrapper[4809]: I0312 08:22:31.334822 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qdxsg"]
Mar 12 08:22:31 crc kubenswrapper[4809]: I0312 08:22:31.871447 4809 generic.go:334] "Generic (PLEG): container finished" podID="a5ececaf-3560-4284-9d82-ac39de15bf88" containerID="114022f72c6823d90e88665866d16f0dfca63980b30479a6b187a9af05e9d09c" exitCode=0
Mar 12 08:22:31 crc kubenswrapper[4809]: I0312 08:22:31.871947 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qdxsg" event={"ID":"a5ececaf-3560-4284-9d82-ac39de15bf88","Type":"ContainerDied","Data":"114022f72c6823d90e88665866d16f0dfca63980b30479a6b187a9af05e9d09c"}
Mar 12 08:22:31 crc kubenswrapper[4809]: I0312 08:22:31.871987 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qdxsg" event={"ID":"a5ececaf-3560-4284-9d82-ac39de15bf88","Type":"ContainerStarted","Data":"e3a507fcc17db941bdb98133c69b8654bbb66ad56b560c6bb81da50997e14de4"}
Mar 12 08:22:31 crc kubenswrapper[4809]: I0312 08:22:31.875244 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerStarted","Data":"d0e866a7bcc1ce3dd51665675baac963d3e25c704cf9588d88017c88de540836"}
Mar 12 08:22:31 crc kubenswrapper[4809]: I0312 08:22:31.941335 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x5bc6-config-ll2pb"]
Mar 12 08:22:31 crc kubenswrapper[4809]: I0312 08:22:31.964104 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8rbct"]
Mar 12 08:22:31 crc kubenswrapper[4809]: I0312 08:22:31.989567 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e5ed-account-create-update-tgts4"]
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.002684 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3d72-account-create-update-vntwh"]
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.016719 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vfrtk"]
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.098472 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.169592 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfd7-account-create-update-c8q5g"]
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.320079 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-85jsk"]
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.338442 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-drcgq"]
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.366730 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-756c-account-create-update-vv58x"]
Mar 12 08:22:32 crc kubenswrapper[4809]: W0312 08:22:32.640291 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd11d110e_e009_481e_a5f6_1a380f66764c.slice/crio-8ee94d1febc5589e0a1254e798fd6f50bd969acf89452cf9643307e25db588b8 WatchSource:0}: Error finding container 8ee94d1febc5589e0a1254e798fd6f50bd969acf89452cf9643307e25db588b8: Status 404 returned error can't find the container with id 8ee94d1febc5589e0a1254e798fd6f50bd969acf89452cf9643307e25db588b8
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.889081 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfd7-account-create-update-c8q5g" event={"ID":"31d780d8-832e-43d9-81f7-8047de4d9076","Type":"ContainerStarted","Data":"551651526d1ff4630552aebaddbd52a0050cb25324ae624248bd16f7c0ac28ba"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.894736 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e5ed-account-create-update-tgts4" event={"ID":"d834299e-a8cc-4c17-9b41-8e00d9fa2929","Type":"ContainerStarted","Data":"cb9c0650431f2f1b74e7b39bc5735f6bbd388833594ca9be2ee0e9ffcc18175d"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.894785 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e5ed-account-create-update-tgts4" event={"ID":"d834299e-a8cc-4c17-9b41-8e00d9fa2929","Type":"ContainerStarted","Data":"b3bb186d80c2bd5b4b14f6796d605cdda22d8ed47f0ec8f3f7881bb94e435e98"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.898484 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rbct" event={"ID":"193b129f-b891-4890-88b0-bfcc2799127b","Type":"ContainerStarted","Data":"7c5fa709bfaea8ecec4a762140e3d2c31d7eb5c3cb7982be000b977a40205591"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.898517 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rbct" event={"ID":"193b129f-b891-4890-88b0-bfcc2799127b","Type":"ContainerStarted","Data":"21e3109a38e0425ce9377915c71d8579fcbacc955d519e52376b882a4f9da181"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.903728 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-756c-account-create-update-vv58x" event={"ID":"656bf0f9-30c6-4f99-acd3-39996a0fa0b4","Type":"ContainerStarted","Data":"8025f6ab9102fc9c4080a55d1efc43c09c59e57bf32447bb60cb3edb4f2c8033"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.912004 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-drcgq" event={"ID":"d11d110e-e009-481e-a5f6-1a380f66764c","Type":"ContainerStarted","Data":"8ee94d1febc5589e0a1254e798fd6f50bd969acf89452cf9643307e25db588b8"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.912943 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-e5ed-account-create-update-tgts4" podStartSLOduration=7.912924888 podStartE2EDuration="7.912924888s" podCreationTimestamp="2026-03-12 08:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:32.910579914 +0000 UTC m=+1426.492615647" watchObservedRunningTime="2026-03-12 08:22:32.912924888 +0000 UTC m=+1426.494960621"
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.914318 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-85jsk" event={"ID":"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1","Type":"ContainerStarted","Data":"e6895341ff1fccbf823cac9dcea88cc2c0f9a68d525252119621c5c54abc1414"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.917530 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"4645669df7058c43176680e5bcc04da67df5ef6a5aca11daa356609ee494577e"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.925960 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vfrtk" event={"ID":"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3","Type":"ContainerStarted","Data":"00dfeb858ed8f349814317a06f81c52920176c3f4d3f5cba44c6e61078c86a9b"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.926008 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vfrtk" event={"ID":"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3","Type":"ContainerStarted","Data":"b745e73134e43f9deb843f26df326fe7842df1452f5494c70fb894416bb4a011"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.937086 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6-config-ll2pb" event={"ID":"ece91fae-9388-4231-a308-28d9ee06524b","Type":"ContainerStarted","Data":"46c81eccdfc52f3dfa31b19209225eeb568e8d916856648b7a637a255ffb6e1d"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.937160 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6-config-ll2pb" event={"ID":"ece91fae-9388-4231-a308-28d9ee06524b","Type":"ContainerStarted","Data":"4c3ad4606b3fae584651e97674b85d7e801ac8703f97024a3a997497eca228ee"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.948358 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-8rbct" podStartSLOduration=7.948336888 podStartE2EDuration="7.948336888s" podCreationTimestamp="2026-03-12 08:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:32.938425907 +0000 UTC m=+1426.520461640" watchObservedRunningTime="2026-03-12 08:22:32.948336888 +0000 UTC m=+1426.530372621"
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.960947 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3d72-account-create-update-vntwh" event={"ID":"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d","Type":"ContainerStarted","Data":"9ce6278a38d95a21cb0eee8368fa80075e844b43c4cd9c1c74615d57a007e847"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.960998 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3d72-account-create-update-vntwh" event={"ID":"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d","Type":"ContainerStarted","Data":"83c21214d15084b061a8b187326804aeefe523879101dca8d7d6757e360ae06b"}
Mar 12 08:22:32 crc kubenswrapper[4809]: I0312 08:22:32.968718 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x5bc6-config-ll2pb" podStartSLOduration=16.968695755 podStartE2EDuration="16.968695755s" podCreationTimestamp="2026-03-12 08:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:32.958860416 +0000 UTC m=+1426.540896149" watchObservedRunningTime="2026-03-12 08:22:32.968695755 +0000 UTC m=+1426.550731478"
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.047598 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-3d72-account-create-update-vntwh" podStartSLOduration=9.047570305 podStartE2EDuration="9.047570305s" podCreationTimestamp="2026-03-12 08:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:33.034516937 +0000 UTC m=+1426.616552680" watchObservedRunningTime="2026-03-12 08:22:33.047570305 +0000 UTC m=+1426.629606038"
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.065551 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-vfrtk" podStartSLOduration=8.065520536 podStartE2EDuration="8.065520536s" podCreationTimestamp="2026-03-12 08:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:32.982123223 +0000 UTC m=+1426.564158956" watchObservedRunningTime="2026-03-12 08:22:33.065520536 +0000 UTC m=+1426.647556259"
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.974205 4809 generic.go:334] "Generic (PLEG): container finished" podID="d834299e-a8cc-4c17-9b41-8e00d9fa2929" containerID="cb9c0650431f2f1b74e7b39bc5735f6bbd388833594ca9be2ee0e9ffcc18175d" exitCode=0
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.974502 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e5ed-account-create-update-tgts4" event={"ID":"d834299e-a8cc-4c17-9b41-8e00d9fa2929","Type":"ContainerDied","Data":"cb9c0650431f2f1b74e7b39bc5735f6bbd388833594ca9be2ee0e9ffcc18175d"}
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.977853 4809 generic.go:334] "Generic (PLEG): container finished" podID="1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d" containerID="9ce6278a38d95a21cb0eee8368fa80075e844b43c4cd9c1c74615d57a007e847" exitCode=0
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.977901 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3d72-account-create-update-vntwh" event={"ID":"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d","Type":"ContainerDied","Data":"9ce6278a38d95a21cb0eee8368fa80075e844b43c4cd9c1c74615d57a007e847"}
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.978957 4809 generic.go:334] "Generic (PLEG): container finished" podID="193b129f-b891-4890-88b0-bfcc2799127b" containerID="7c5fa709bfaea8ecec4a762140e3d2c31d7eb5c3cb7982be000b977a40205591" exitCode=0
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.978998 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rbct" event={"ID":"193b129f-b891-4890-88b0-bfcc2799127b","Type":"ContainerDied","Data":"7c5fa709bfaea8ecec4a762140e3d2c31d7eb5c3cb7982be000b977a40205591"}
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.980053 4809 generic.go:334] "Generic (PLEG): container finished" podID="656bf0f9-30c6-4f99-acd3-39996a0fa0b4" containerID="6708c0691d84e5151f002f9e2d9eb1612e0bc72aaf3d3923ae49e2bf6f2f414d" exitCode=0
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.980091 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-756c-account-create-update-vv58x" event={"ID":"656bf0f9-30c6-4f99-acd3-39996a0fa0b4","Type":"ContainerDied","Data":"6708c0691d84e5151f002f9e2d9eb1612e0bc72aaf3d3923ae49e2bf6f2f414d"}
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.981473 4809 generic.go:334] "Generic (PLEG): container finished" podID="d11d110e-e009-481e-a5f6-1a380f66764c" containerID="0ae94b47a680a52a07d23639fe4c7bc0466feb5b16b9b932a0f45b2a9739e885" exitCode=0
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.981511 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-drcgq" event={"ID":"d11d110e-e009-481e-a5f6-1a380f66764c","Type":"ContainerDied","Data":"0ae94b47a680a52a07d23639fe4c7bc0466feb5b16b9b932a0f45b2a9739e885"}
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.982541 4809 generic.go:334] "Generic (PLEG): container finished" podID="1ccdcb5e-d68e-4046-9f86-3d37634c9cf3" containerID="00dfeb858ed8f349814317a06f81c52920176c3f4d3f5cba44c6e61078c86a9b" exitCode=0
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.982580 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vfrtk" event={"ID":"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3","Type":"ContainerDied","Data":"00dfeb858ed8f349814317a06f81c52920176c3f4d3f5cba44c6e61078c86a9b"}
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.986912 4809 generic.go:334] "Generic (PLEG): container finished" podID="ece91fae-9388-4231-a308-28d9ee06524b" containerID="46c81eccdfc52f3dfa31b19209225eeb568e8d916856648b7a637a255ffb6e1d" exitCode=0
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.986960 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6-config-ll2pb" event={"ID":"ece91fae-9388-4231-a308-28d9ee06524b","Type":"ContainerDied","Data":"46c81eccdfc52f3dfa31b19209225eeb568e8d916856648b7a637a255ffb6e1d"}
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.999730 4809 generic.go:334] "Generic (PLEG): container finished" podID="31d780d8-832e-43d9-81f7-8047de4d9076" containerID="4941303b1e8bcbb1797326a4cfdbad9d29bb0afcff0c88fd24ebb36eb9690275" exitCode=0
Mar 12 08:22:33 crc kubenswrapper[4809]: I0312 08:22:33.999776 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfd7-account-create-update-c8q5g" event={"ID":"31d780d8-832e-43d9-81f7-8047de4d9076","Type":"ContainerDied","Data":"4941303b1e8bcbb1797326a4cfdbad9d29bb0afcff0c88fd24ebb36eb9690275"}
Mar 12 08:22:34 crc kubenswrapper[4809]: I0312 08:22:34.372193 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qdxsg"
Mar 12 08:22:34 crc kubenswrapper[4809]: I0312 08:22:34.489300 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ececaf-3560-4284-9d82-ac39de15bf88-operator-scripts\") pod \"a5ececaf-3560-4284-9d82-ac39de15bf88\" (UID: \"a5ececaf-3560-4284-9d82-ac39de15bf88\") "
Mar 12 08:22:34 crc kubenswrapper[4809]: I0312 08:22:34.489756 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2cr9\" (UniqueName: \"kubernetes.io/projected/a5ececaf-3560-4284-9d82-ac39de15bf88-kube-api-access-h2cr9\") pod \"a5ececaf-3560-4284-9d82-ac39de15bf88\" (UID: \"a5ececaf-3560-4284-9d82-ac39de15bf88\") "
Mar 12 08:22:34 crc kubenswrapper[4809]: I0312 08:22:34.490526 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ececaf-3560-4284-9d82-ac39de15bf88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5ececaf-3560-4284-9d82-ac39de15bf88" (UID: "a5ececaf-3560-4284-9d82-ac39de15bf88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:22:34 crc kubenswrapper[4809]: I0312 08:22:34.493390 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ececaf-3560-4284-9d82-ac39de15bf88-kube-api-access-h2cr9" (OuterVolumeSpecName: "kube-api-access-h2cr9") pod "a5ececaf-3560-4284-9d82-ac39de15bf88" (UID: "a5ececaf-3560-4284-9d82-ac39de15bf88"). InnerVolumeSpecName "kube-api-access-h2cr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:22:34 crc kubenswrapper[4809]: I0312 08:22:34.592823 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2cr9\" (UniqueName: \"kubernetes.io/projected/a5ececaf-3560-4284-9d82-ac39de15bf88-kube-api-access-h2cr9\") on node \"crc\" DevicePath \"\""
Mar 12 08:22:34 crc kubenswrapper[4809]: I0312 08:22:34.592863 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5ececaf-3560-4284-9d82-ac39de15bf88-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 08:22:35 crc kubenswrapper[4809]: I0312 08:22:35.012455 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"c423e5b0a6ce12f4eaaf865efe9a47f7f948a50871e5a9bbfc98548ba3dee638"}
Mar 12 08:22:35 crc kubenswrapper[4809]: I0312 08:22:35.012765 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"6cbfefe2e467663e3791c2b5d5d7d4cd91a10dc59be81b902fbb0857c5690316"}
Mar 12 08:22:35 crc kubenswrapper[4809]: I0312 08:22:35.014635 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qdxsg" event={"ID":"a5ececaf-3560-4284-9d82-ac39de15bf88","Type":"ContainerDied","Data":"e3a507fcc17db941bdb98133c69b8654bbb66ad56b560c6bb81da50997e14de4"}
Mar 12 08:22:35 crc kubenswrapper[4809]: I0312 08:22:35.014662 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3a507fcc17db941bdb98133c69b8654bbb66ad56b560c6bb81da50997e14de4"
Mar 12 08:22:35 crc kubenswrapper[4809]: I0312 08:22:35.014727 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qdxsg"
Mar 12 08:22:35 crc kubenswrapper[4809]: I0312 08:22:35.025605 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerStarted","Data":"cb451e2cafc21b5864c8d08ce2407b5b5779919d542166eca1ee8fd0cb372aa9"}
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.086704 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-drcgq" event={"ID":"d11d110e-e009-481e-a5f6-1a380f66764c","Type":"ContainerDied","Data":"8ee94d1febc5589e0a1254e798fd6f50bd969acf89452cf9643307e25db588b8"}
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.087472 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee94d1febc5589e0a1254e798fd6f50bd969acf89452cf9643307e25db588b8"
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.091065 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vfrtk" event={"ID":"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3","Type":"ContainerDied","Data":"b745e73134e43f9deb843f26df326fe7842df1452f5494c70fb894416bb4a011"}
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.091103 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b745e73134e43f9deb843f26df326fe7842df1452f5494c70fb894416bb4a011"
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.099720 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x5bc6-config-ll2pb" event={"ID":"ece91fae-9388-4231-a308-28d9ee06524b","Type":"ContainerDied","Data":"4c3ad4606b3fae584651e97674b85d7e801ac8703f97024a3a997497eca228ee"}
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.099844 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c3ad4606b3fae584651e97674b85d7e801ac8703f97024a3a997497eca228ee"
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.103377 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfd7-account-create-update-c8q5g" event={"ID":"31d780d8-832e-43d9-81f7-8047de4d9076","Type":"ContainerDied","Data":"551651526d1ff4630552aebaddbd52a0050cb25324ae624248bd16f7c0ac28ba"}
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.103435 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="551651526d1ff4630552aebaddbd52a0050cb25324ae624248bd16f7c0ac28ba"
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.110510 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3d72-account-create-update-vntwh" event={"ID":"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d","Type":"ContainerDied","Data":"83c21214d15084b061a8b187326804aeefe523879101dca8d7d6757e360ae06b"}
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.110613 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83c21214d15084b061a8b187326804aeefe523879101dca8d7d6757e360ae06b"
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.112903 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-756c-account-create-update-vv58x" event={"ID":"656bf0f9-30c6-4f99-acd3-39996a0fa0b4","Type":"ContainerDied","Data":"8025f6ab9102fc9c4080a55d1efc43c09c59e57bf32447bb60cb3edb4f2c8033"}
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.112946 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8025f6ab9102fc9c4080a55d1efc43c09c59e57bf32447bb60cb3edb4f2c8033"
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.114195 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rbct" event={"ID":"193b129f-b891-4890-88b0-bfcc2799127b","Type":"ContainerDied","Data":"21e3109a38e0425ce9377915c71d8579fcbacc955d519e52376b882a4f9da181"}
Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.114227 4809
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e3109a38e0425ce9377915c71d8579fcbacc955d519e52376b882a4f9da181" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.357384 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.404751 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.406872 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-operator-scripts\") pod \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\" (UID: \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.407010 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjzb9\" (UniqueName: \"kubernetes.io/projected/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-kube-api-access-mjzb9\") pod \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\" (UID: \"656bf0f9-30c6-4f99-acd3-39996a0fa0b4\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.409160 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "656bf0f9-30c6-4f99-acd3-39996a0fa0b4" (UID: "656bf0f9-30c6-4f99-acd3-39996a0fa0b4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.413622 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-kube-api-access-mjzb9" (OuterVolumeSpecName: "kube-api-access-mjzb9") pod "656bf0f9-30c6-4f99-acd3-39996a0fa0b4" (UID: "656bf0f9-30c6-4f99-acd3-39996a0fa0b4"). InnerVolumeSpecName "kube-api-access-mjzb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.420948 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-drcgq" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.494912 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8rbct" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.504333 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.510466 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.510540 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjzb9\" (UniqueName: \"kubernetes.io/projected/656bf0f9-30c6-4f99-acd3-39996a0fa0b4-kube-api-access-mjzb9\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.511666 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vfrtk" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.535184 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bfd7-account-create-update-c8q5g" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.545570 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e5ed-account-create-update-tgts4" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613186 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d780d8-832e-43d9-81f7-8047de4d9076-operator-scripts\") pod \"31d780d8-832e-43d9-81f7-8047de4d9076\" (UID: \"31d780d8-832e-43d9-81f7-8047de4d9076\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613238 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-kube-api-access-fm5km\") pod \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\" (UID: \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613269 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgks9\" (UniqueName: \"kubernetes.io/projected/31d780d8-832e-43d9-81f7-8047de4d9076-kube-api-access-fgks9\") pod \"31d780d8-832e-43d9-81f7-8047de4d9076\" (UID: \"31d780d8-832e-43d9-81f7-8047de4d9076\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613289 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-additional-scripts\") pod \"ece91fae-9388-4231-a308-28d9ee06524b\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613315 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run-ovn\") pod 
\"ece91fae-9388-4231-a308-28d9ee06524b\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613340 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d834299e-a8cc-4c17-9b41-8e00d9fa2929-operator-scripts\") pod \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\" (UID: \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613368 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwmxc\" (UniqueName: \"kubernetes.io/projected/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-kube-api-access-xwmxc\") pod \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\" (UID: \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613395 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf7td\" (UniqueName: \"kubernetes.io/projected/d834299e-a8cc-4c17-9b41-8e00d9fa2929-kube-api-access-cf7td\") pod \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\" (UID: \"d834299e-a8cc-4c17-9b41-8e00d9fa2929\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613413 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-operator-scripts\") pod \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\" (UID: \"1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613432 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwj2h\" (UniqueName: \"kubernetes.io/projected/d11d110e-e009-481e-a5f6-1a380f66764c-kube-api-access-mwj2h\") pod \"d11d110e-e009-481e-a5f6-1a380f66764c\" (UID: \"d11d110e-e009-481e-a5f6-1a380f66764c\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613454 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnt56\" (UniqueName: \"kubernetes.io/projected/193b129f-b891-4890-88b0-bfcc2799127b-kube-api-access-tnt56\") pod \"193b129f-b891-4890-88b0-bfcc2799127b\" (UID: \"193b129f-b891-4890-88b0-bfcc2799127b\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613483 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run\") pod \"ece91fae-9388-4231-a308-28d9ee06524b\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613502 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-log-ovn\") pod \"ece91fae-9388-4231-a308-28d9ee06524b\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613531 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-scripts\") pod \"ece91fae-9388-4231-a308-28d9ee06524b\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613552 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-operator-scripts\") pod \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\" (UID: \"1ccdcb5e-d68e-4046-9f86-3d37634c9cf3\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613575 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11d110e-e009-481e-a5f6-1a380f66764c-operator-scripts\") pod \"d11d110e-e009-481e-a5f6-1a380f66764c\" (UID: 
\"d11d110e-e009-481e-a5f6-1a380f66764c\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613611 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/193b129f-b891-4890-88b0-bfcc2799127b-operator-scripts\") pod \"193b129f-b891-4890-88b0-bfcc2799127b\" (UID: \"193b129f-b891-4890-88b0-bfcc2799127b\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.613637 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2srk\" (UniqueName: \"kubernetes.io/projected/ece91fae-9388-4231-a308-28d9ee06524b-kube-api-access-n2srk\") pod \"ece91fae-9388-4231-a308-28d9ee06524b\" (UID: \"ece91fae-9388-4231-a308-28d9ee06524b\") " Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.614741 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ece91fae-9388-4231-a308-28d9ee06524b" (UID: "ece91fae-9388-4231-a308-28d9ee06524b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.615153 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d" (UID: "1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.615749 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d780d8-832e-43d9-81f7-8047de4d9076-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31d780d8-832e-43d9-81f7-8047de4d9076" (UID: "31d780d8-832e-43d9-81f7-8047de4d9076"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.617362 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ece91fae-9388-4231-a308-28d9ee06524b" (UID: "ece91fae-9388-4231-a308-28d9ee06524b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.621068 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ece91fae-9388-4231-a308-28d9ee06524b" (UID: "ece91fae-9388-4231-a308-28d9ee06524b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.621878 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece91fae-9388-4231-a308-28d9ee06524b-kube-api-access-n2srk" (OuterVolumeSpecName: "kube-api-access-n2srk") pod "ece91fae-9388-4231-a308-28d9ee06524b" (UID: "ece91fae-9388-4231-a308-28d9ee06524b"). InnerVolumeSpecName "kube-api-access-n2srk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.622074 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run" (OuterVolumeSpecName: "var-run") pod "ece91fae-9388-4231-a308-28d9ee06524b" (UID: "ece91fae-9388-4231-a308-28d9ee06524b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.623023 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d834299e-a8cc-4c17-9b41-8e00d9fa2929-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d834299e-a8cc-4c17-9b41-8e00d9fa2929" (UID: "d834299e-a8cc-4c17-9b41-8e00d9fa2929"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.623612 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-scripts" (OuterVolumeSpecName: "scripts") pod "ece91fae-9388-4231-a308-28d9ee06524b" (UID: "ece91fae-9388-4231-a308-28d9ee06524b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.624163 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ccdcb5e-d68e-4046-9f86-3d37634c9cf3" (UID: "1ccdcb5e-d68e-4046-9f86-3d37634c9cf3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.624712 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11d110e-e009-481e-a5f6-1a380f66764c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d11d110e-e009-481e-a5f6-1a380f66764c" (UID: "d11d110e-e009-481e-a5f6-1a380f66764c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.625170 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11d110e-e009-481e-a5f6-1a380f66764c-kube-api-access-mwj2h" (OuterVolumeSpecName: "kube-api-access-mwj2h") pod "d11d110e-e009-481e-a5f6-1a380f66764c" (UID: "d11d110e-e009-481e-a5f6-1a380f66764c"). InnerVolumeSpecName "kube-api-access-mwj2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.625200 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/193b129f-b891-4890-88b0-bfcc2799127b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "193b129f-b891-4890-88b0-bfcc2799127b" (UID: "193b129f-b891-4890-88b0-bfcc2799127b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.626593 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d780d8-832e-43d9-81f7-8047de4d9076-kube-api-access-fgks9" (OuterVolumeSpecName: "kube-api-access-fgks9") pod "31d780d8-832e-43d9-81f7-8047de4d9076" (UID: "31d780d8-832e-43d9-81f7-8047de4d9076"). InnerVolumeSpecName "kube-api-access-fgks9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.626655 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-kube-api-access-fm5km" (OuterVolumeSpecName: "kube-api-access-fm5km") pod "1ccdcb5e-d68e-4046-9f86-3d37634c9cf3" (UID: "1ccdcb5e-d68e-4046-9f86-3d37634c9cf3"). InnerVolumeSpecName "kube-api-access-fm5km". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.626676 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193b129f-b891-4890-88b0-bfcc2799127b-kube-api-access-tnt56" (OuterVolumeSpecName: "kube-api-access-tnt56") pod "193b129f-b891-4890-88b0-bfcc2799127b" (UID: "193b129f-b891-4890-88b0-bfcc2799127b"). InnerVolumeSpecName "kube-api-access-tnt56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.628308 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d834299e-a8cc-4c17-9b41-8e00d9fa2929-kube-api-access-cf7td" (OuterVolumeSpecName: "kube-api-access-cf7td") pod "d834299e-a8cc-4c17-9b41-8e00d9fa2929" (UID: "d834299e-a8cc-4c17-9b41-8e00d9fa2929"). InnerVolumeSpecName "kube-api-access-cf7td". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.631480 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-kube-api-access-xwmxc" (OuterVolumeSpecName: "kube-api-access-xwmxc") pod "1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d" (UID: "1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d"). InnerVolumeSpecName "kube-api-access-xwmxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.715623 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwmxc\" (UniqueName: \"kubernetes.io/projected/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-kube-api-access-xwmxc\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.715818 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf7td\" (UniqueName: \"kubernetes.io/projected/d834299e-a8cc-4c17-9b41-8e00d9fa2929-kube-api-access-cf7td\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.715879 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.715953 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwj2h\" (UniqueName: \"kubernetes.io/projected/d11d110e-e009-481e-a5f6-1a380f66764c-kube-api-access-mwj2h\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716014 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnt56\" (UniqueName: \"kubernetes.io/projected/193b129f-b891-4890-88b0-bfcc2799127b-kube-api-access-tnt56\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716077 4809 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716155 4809 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc 
kubenswrapper[4809]: I0312 08:22:38.716214 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716282 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716343 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11d110e-e009-481e-a5f6-1a380f66764c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716407 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/193b129f-b891-4890-88b0-bfcc2799127b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716475 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2srk\" (UniqueName: \"kubernetes.io/projected/ece91fae-9388-4231-a308-28d9ee06524b-kube-api-access-n2srk\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716534 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31d780d8-832e-43d9-81f7-8047de4d9076-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716587 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm5km\" (UniqueName: \"kubernetes.io/projected/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3-kube-api-access-fm5km\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716645 4809 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-fgks9\" (UniqueName: \"kubernetes.io/projected/31d780d8-832e-43d9-81f7-8047de4d9076-kube-api-access-fgks9\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716701 4809 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ece91fae-9388-4231-a308-28d9ee06524b-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716752 4809 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ece91fae-9388-4231-a308-28d9ee06524b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:38 crc kubenswrapper[4809]: I0312 08:22:38.716811 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d834299e-a8cc-4c17-9b41-8e00d9fa2929-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.148771 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e5ed-account-create-update-tgts4" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.160417 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"2b82dfa960da21219e18e845c90afd42664f000b07d6e7d87ed125a1608a2fd9"} Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.160508 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"8152f199f79659fddbbba99e48e85fab2f6c13b2c4696495bd118bb4bfebdfa2"} Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.160555 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e5ed-account-create-update-tgts4" event={"ID":"d834299e-a8cc-4c17-9b41-8e00d9fa2929","Type":"ContainerDied","Data":"b3bb186d80c2bd5b4b14f6796d605cdda22d8ed47f0ec8f3f7881bb94e435e98"} Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.160618 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3bb186d80c2bd5b4b14f6796d605cdda22d8ed47f0ec8f3f7881bb94e435e98" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.162425 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-drcgq" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.162530 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-85jsk" event={"ID":"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1","Type":"ContainerStarted","Data":"b70e9b0f42a41567f7d3edfc8bd25ec04b7f4ae8f4c5ae879608b77c71628bd1"} Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.162629 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8rbct" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.162678 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfd7-account-create-update-c8q5g" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.162705 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vfrtk" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.162776 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x5bc6-config-ll2pb" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.163011 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-756c-account-create-update-vv58x" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.163420 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3d72-account-create-update-vntwh" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.215797 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-85jsk" podStartSLOduration=8.71543341 podStartE2EDuration="14.215771875s" podCreationTimestamp="2026-03-12 08:22:25 +0000 UTC" firstStartedPulling="2026-03-12 08:22:32.631728789 +0000 UTC m=+1426.213764522" lastFinishedPulling="2026-03-12 08:22:38.132067254 +0000 UTC m=+1431.714102987" observedRunningTime="2026-03-12 08:22:39.199510165 +0000 UTC m=+1432.781545898" watchObservedRunningTime="2026-03-12 08:22:39.215771875 +0000 UTC m=+1432.797807608" Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.652334 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x5bc6-config-ll2pb"] Mar 12 08:22:39 crc kubenswrapper[4809]: I0312 08:22:39.669345 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ovn-controller-x5bc6-config-ll2pb"] Mar 12 08:22:41 crc kubenswrapper[4809]: I0312 08:22:41.123989 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece91fae-9388-4231-a308-28d9ee06524b" path="/var/lib/kubelet/pods/ece91fae-9388-4231-a308-28d9ee06524b/volumes" Mar 12 08:22:42 crc kubenswrapper[4809]: I0312 08:22:42.210483 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerStarted","Data":"a87fa4653ed1debf5091aff3f72f0c8d027e56931516dd414e3d23d806fa33b1"} Mar 12 08:22:42 crc kubenswrapper[4809]: I0312 08:22:42.229012 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"13e367f57426cea8e354530ec2bd15f373c1a4b6c78f801dd359723ad2f89f9d"} Mar 12 08:22:42 crc kubenswrapper[4809]: I0312 08:22:42.229098 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"0f39167856ccf16e2efa5184728973d189a918e66b9a1c6781fc0af0d70b8543"} Mar 12 08:22:42 crc kubenswrapper[4809]: I0312 08:22:42.229129 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"a8dfbb5764775af30976cfc8e8234d6f1d35ebe8eb3ac1d8ad132bb5b150eaa0"} Mar 12 08:22:42 crc kubenswrapper[4809]: I0312 08:22:42.229141 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"f8bd6c88635459c29aecfafad87b2a9b71751eb74dd857c89e119ec1e413b3df"} Mar 12 08:22:42 crc kubenswrapper[4809]: I0312 08:22:42.240439 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.387589946 podStartE2EDuration="47.240419214s" podCreationTimestamp="2026-03-12 08:21:55 +0000 UTC" firstStartedPulling="2026-03-12 08:22:09.390136138 +0000 UTC m=+1402.972171871" lastFinishedPulling="2026-03-12 08:22:41.242965406 +0000 UTC m=+1434.825001139" observedRunningTime="2026-03-12 08:22:42.237328301 +0000 UTC m=+1435.819364044" watchObservedRunningTime="2026-03-12 08:22:42.240419214 +0000 UTC m=+1435.822454947" Mar 12 08:22:43 crc kubenswrapper[4809]: I0312 08:22:43.255191 4809 generic.go:334] "Generic (PLEG): container finished" podID="2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1" containerID="b70e9b0f42a41567f7d3edfc8bd25ec04b7f4ae8f4c5ae879608b77c71628bd1" exitCode=0 Mar 12 08:22:43 crc kubenswrapper[4809]: I0312 08:22:43.255219 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-85jsk" event={"ID":"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1","Type":"ContainerDied","Data":"b70e9b0f42a41567f7d3edfc8bd25ec04b7f4ae8f4c5ae879608b77c71628bd1"} Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.272558 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"85670283d0c5497d5285d96f0ec87717cd87851a8309693b100494696d48c53d"} Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.678177 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-85jsk" Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.811314 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-combined-ca-bundle\") pod \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.811466 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b65ht\" (UniqueName: \"kubernetes.io/projected/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-kube-api-access-b65ht\") pod \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.811644 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-config-data\") pod \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\" (UID: \"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1\") " Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.827599 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-kube-api-access-b65ht" (OuterVolumeSpecName: "kube-api-access-b65ht") pod "2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1" (UID: "2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1"). InnerVolumeSpecName "kube-api-access-b65ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.862160 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1" (UID: "2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.879287 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-config-data" (OuterVolumeSpecName: "config-data") pod "2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1" (UID: "2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.915490 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.915535 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:44 crc kubenswrapper[4809]: I0312 08:22:44.915553 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b65ht\" (UniqueName: \"kubernetes.io/projected/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1-kube-api-access-b65ht\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.321266 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-85jsk" event={"ID":"2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1","Type":"ContainerDied","Data":"e6895341ff1fccbf823cac9dcea88cc2c0f9a68d525252119621c5c54abc1414"} Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.322337 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6895341ff1fccbf823cac9dcea88cc2c0f9a68d525252119621c5c54abc1414" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.321327 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-85jsk" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.329172 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"81192f544df531abf1cdd83699ce0c6e0e00069ecc22964ae26b2f09102eca6a"} Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.329233 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"e9b9ab46e5cd72e11112a3b949e5a1fc42db0f60e39021a4fabf9d580f1656f7"} Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.329256 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"8dd474eaeaff22293c935b8b22ec0a444aa273c0ee996d0b3b0a640186702f17"} Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.329272 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"04e2e804df81a5d0eb11ba62993ca2b44fcb2ccc675bdc335a76f7a641ab477c"} Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.579500 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9qw28"] Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.579941 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece91fae-9388-4231-a308-28d9ee06524b" containerName="ovn-config" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.579963 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece91fae-9388-4231-a308-28d9ee06524b" containerName="ovn-config" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.579980 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11d110e-e009-481e-a5f6-1a380f66764c" 
containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.579986 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11d110e-e009-481e-a5f6-1a380f66764c" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.580007 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccdcb5e-d68e-4046-9f86-3d37634c9cf3" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580014 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccdcb5e-d68e-4046-9f86-3d37634c9cf3" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.580028 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1" containerName="keystone-db-sync" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580034 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1" containerName="keystone-db-sync" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.580047 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d834299e-a8cc-4c17-9b41-8e00d9fa2929" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580053 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d834299e-a8cc-4c17-9b41-8e00d9fa2929" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.580066 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193b129f-b891-4890-88b0-bfcc2799127b" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580072 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="193b129f-b891-4890-88b0-bfcc2799127b" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.580084 4809 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580090 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.580105 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ececaf-3560-4284-9d82-ac39de15bf88" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580127 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ececaf-3560-4284-9d82-ac39de15bf88" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.580139 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656bf0f9-30c6-4f99-acd3-39996a0fa0b4" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580146 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="656bf0f9-30c6-4f99-acd3-39996a0fa0b4" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: E0312 08:22:45.580156 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d780d8-832e-43d9-81f7-8047de4d9076" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580162 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d780d8-832e-43d9-81f7-8047de4d9076" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580359 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ececaf-3560-4284-9d82-ac39de15bf88" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580371 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d" 
containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580378 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="656bf0f9-30c6-4f99-acd3-39996a0fa0b4" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580392 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11d110e-e009-481e-a5f6-1a380f66764c" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580407 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="193b129f-b891-4890-88b0-bfcc2799127b" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580416 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d834299e-a8cc-4c17-9b41-8e00d9fa2929" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580425 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ccdcb5e-d68e-4046-9f86-3d37634c9cf3" containerName="mariadb-database-create" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580435 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece91fae-9388-4231-a308-28d9ee06524b" containerName="ovn-config" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580443 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d780d8-832e-43d9-81f7-8047de4d9076" containerName="mariadb-account-create-update" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.580453 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1" containerName="keystone-db-sync" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.582050 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.582634 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.606615 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9qw28"] Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.679149 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kpsqs"] Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.681875 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.690483 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-fernet-keys\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.690532 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-combined-ca-bundle\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.690591 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-credential-keys\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 
08:22:45.690622 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-config-data\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.690680 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-scripts\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.690724 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqhrf\" (UniqueName: \"kubernetes.io/projected/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-kube-api-access-zqhrf\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.691091 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.691289 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.706060 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.706300 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.706406 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gndsl" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 
08:22:45.728224 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kpsqs"] Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.738350 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jw2qr"] Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.741524 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.745496 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4t7mv" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.745839 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.799750 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-fernet-keys\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.799833 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-combined-ca-bundle\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.799911 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-credential-keys\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.799961 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-config\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.800009 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-config-data\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.800215 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-scripts\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.800294 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-dns-svc\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.800406 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqhrf\" (UniqueName: \"kubernetes.io/projected/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-kube-api-access-zqhrf\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.800456 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.800487 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jdlq\" (UniqueName: \"kubernetes.io/projected/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-kube-api-access-7jdlq\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.800516 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.813943 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-config-data\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.814558 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jw2qr"] Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.818955 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-fernet-keys\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc 
kubenswrapper[4809]: I0312 08:22:45.839202 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-credential-keys\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.839787 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-scripts\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.840294 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-combined-ca-bundle\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.843024 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqhrf\" (UniqueName: \"kubernetes.io/projected/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-kube-api-access-zqhrf\") pod \"keystone-bootstrap-kpsqs\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.913968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-config-data\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.914907 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-dns-svc\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.915093 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htrmz\" (UniqueName: \"kubernetes.io/projected/a8ef0743-567a-4a4b-aada-a0bc3659b200-kube-api-access-htrmz\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.915377 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.915431 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jdlq\" (UniqueName: \"kubernetes.io/projected/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-kube-api-access-7jdlq\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.915842 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.915882 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-combined-ca-bundle\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.916231 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-config\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.916885 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-dns-svc\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.917251 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.917444 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.917728 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-config\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: 
\"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.957556 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qxpjh"] Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.958910 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.963544 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m2fpj" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.964776 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.969178 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 12 08:22:45 crc kubenswrapper[4809]: I0312 08:22:45.978483 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qxpjh"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:45.992254 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jdlq\" (UniqueName: \"kubernetes.io/projected/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-kube-api-access-7jdlq\") pod \"dnsmasq-dns-f877ddd87-9qw28\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") " pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.020242 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htrmz\" (UniqueName: \"kubernetes.io/projected/a8ef0743-567a-4a4b-aada-a0bc3659b200-kube-api-access-htrmz\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.020372 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-combined-ca-bundle\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.020603 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-config-data\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.035346 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.041345 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-combined-ca-bundle\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.057617 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-config-data\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.090281 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htrmz\" (UniqueName: \"kubernetes.io/projected/a8ef0743-567a-4a4b-aada-a0bc3659b200-kube-api-access-htrmz\") pod \"heat-db-sync-jw2qr\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.146479 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-scripts\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.146655 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-etc-machine-id\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.146702 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-combined-ca-bundle\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.146764 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44gf7\" (UniqueName: \"kubernetes.io/projected/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-kube-api-access-44gf7\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.146805 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-db-sync-config-data\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.146834 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-config-data\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.250825 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-9qw28" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.251250 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-scripts\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.251416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-etc-machine-id\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.251470 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-combined-ca-bundle\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.251574 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44gf7\" (UniqueName: \"kubernetes.io/projected/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-kube-api-access-44gf7\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc 
kubenswrapper[4809]: I0312 08:22:46.251631 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-db-sync-config-data\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.251661 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-config-data\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.251931 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jw2qr" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.266132 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-etc-machine-id\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.345762 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-combined-ca-bundle\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.348791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-db-sync-config-data\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " 
pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.348893 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-98nmv"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.355371 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-scripts\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.356760 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44gf7\" (UniqueName: \"kubernetes.io/projected/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-kube-api-access-44gf7\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.359974 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-config-data\") pod \"cinder-db-sync-qxpjh\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.360193 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.368231 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.368553 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.373425 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9g5gm" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.409377 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-nzpz5"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.410259 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.438715 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9qw28"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.439059 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"cf579c2c2ff73e5f61de6c65f561ab616d2cb3d7e2585dd11ee533572ef6fb2e"} Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.439391 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b325264-3ac9-446e-b820-c40d942263e6","Type":"ContainerStarted","Data":"ff6f9a50d03a1c8809a9bc10ac03dc86b36c15baac880917686897e495e5d36d"} Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.439014 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.441794 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.442335 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6pfp4" event={"ID":"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda","Type":"ContainerStarted","Data":"b1cdfec6c07a1cb3b52e6148237b1b1cffebcdeb51474dc302a93b1b39f0ba4c"} Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.442470 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-25nrr" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.444802 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.460604 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nzpz5"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.468288 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-scripts\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.468350 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-combined-ca-bundle\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.468413 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-config-data\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.468475 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsllc\" (UniqueName: \"kubernetes.io/projected/0857990f-7921-4ea0-a0c1-e431cc7de107-kube-api-access-nsllc\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.468584 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0857990f-7921-4ea0-a0c1-e431cc7de107-logs\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.503037 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-98nmv"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.522894 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jf5zs"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.526073 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.537697 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-w2csq" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.551706 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.557055 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jf5zs"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.573375 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsllc\" (UniqueName: \"kubernetes.io/projected/0857990f-7921-4ea0-a0c1-e431cc7de107-kube-api-access-nsllc\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.573469 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-combined-ca-bundle\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.573539 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0857990f-7921-4ea0-a0c1-e431cc7de107-logs\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.573566 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd64d\" (UniqueName: 
\"kubernetes.io/projected/d3108f49-e70c-4650-a969-b83a1ed46a14-kube-api-access-xd64d\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.573735 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-config\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.573825 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-scripts\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.573860 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-combined-ca-bundle\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.573910 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-config-data\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.578600 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0857990f-7921-4ea0-a0c1-e431cc7de107-logs\") pod \"placement-db-sync-98nmv\" (UID: 
\"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.582344 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.584786 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.599002 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-combined-ca-bundle\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.603385 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-scripts\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.607063 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-config-data\") pod \"placement-db-sync-98nmv\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.626391 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"] Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.631147 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsllc\" (UniqueName: \"kubernetes.io/projected/0857990f-7921-4ea0-a0c1-e431cc7de107-kube-api-access-nsllc\") pod \"placement-db-sync-98nmv\" (UID: 
\"0857990f-7921-4ea0-a0c1-e431cc7de107\") " pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.667077 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=48.074355363 podStartE2EDuration="59.667052604s" podCreationTimestamp="2026-03-12 08:21:47 +0000 UTC" firstStartedPulling="2026-03-12 08:22:32.140848779 +0000 UTC m=+1425.722884502" lastFinishedPulling="2026-03-12 08:22:43.733546 +0000 UTC m=+1437.315581743" observedRunningTime="2026-03-12 08:22:46.516133512 +0000 UTC m=+1440.098169255" watchObservedRunningTime="2026-03-12 08:22:46.667052604 +0000 UTC m=+1440.249088337" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.675978 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676036 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbtl6\" (UniqueName: \"kubernetes.io/projected/dffab84a-0090-4a84-a40b-74a3c49452ef-kube-api-access-dbtl6\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676086 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-combined-ca-bundle\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676135 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-config\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676160 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd64d\" (UniqueName: \"kubernetes.io/projected/d3108f49-e70c-4650-a969-b83a1ed46a14-kube-api-access-xd64d\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676225 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676255 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-db-sync-config-data\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676285 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676315 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-config\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676336 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-combined-ca-bundle\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.676391 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vcv\" (UniqueName: \"kubernetes.io/projected/760b497c-568e-4501-86c1-1c9f6b5e7f7d-kube-api-access-j4vcv\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.697446 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-config\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.697892 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-combined-ca-bundle\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.710742 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-98nmv" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.717672 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd64d\" (UniqueName: \"kubernetes.io/projected/d3108f49-e70c-4650-a969-b83a1ed46a14-kube-api-access-xd64d\") pod \"neutron-db-sync-nzpz5\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.718493 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6pfp4" podStartSLOduration=3.08672988 podStartE2EDuration="40.718457194s" podCreationTimestamp="2026-03-12 08:22:06 +0000 UTC" firstStartedPulling="2026-03-12 08:22:07.161378624 +0000 UTC m=+1400.743414357" lastFinishedPulling="2026-03-12 08:22:44.793105928 +0000 UTC m=+1438.375141671" observedRunningTime="2026-03-12 08:22:46.55117743 +0000 UTC m=+1440.133213153" watchObservedRunningTime="2026-03-12 08:22:46.718457194 +0000 UTC m=+1440.300492927" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.777868 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4vcv\" (UniqueName: \"kubernetes.io/projected/760b497c-568e-4501-86c1-1c9f6b5e7f7d-kube-api-access-j4vcv\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.777928 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.777953 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dbtl6\" (UniqueName: \"kubernetes.io/projected/dffab84a-0090-4a84-a40b-74a3c49452ef-kube-api-access-dbtl6\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.777993 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-config\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.778055 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.778080 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-db-sync-config-data\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.778105 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.778145 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-combined-ca-bundle\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.790968 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.799327 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.799898 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.801195 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.815805 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-combined-ca-bundle\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.827475 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-db-sync-config-data\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.834342 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-config\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.859403 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbtl6\" (UniqueName: \"kubernetes.io/projected/dffab84a-0090-4a84-a40b-74a3c49452ef-kube-api-access-dbtl6\") pod \"dnsmasq-dns-68dcc9cf6f-dqdxz\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.869198 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4vcv\" (UniqueName: \"kubernetes.io/projected/760b497c-568e-4501-86c1-1c9f6b5e7f7d-kube-api-access-j4vcv\") pod \"barbican-db-sync-jf5zs\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:22:46 crc kubenswrapper[4809]: 
I0312 08:22:46.876744 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jf5zs"
Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.982797 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"
Mar 12 08:22:46 crc kubenswrapper[4809]: I0312 08:22:46.997504 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kpsqs"]
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.323814 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"]
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.363190 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"]
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.365376 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.374571 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.387328 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"]
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.542957 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.610493 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kpsqs" event={"ID":"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f","Type":"ContainerStarted","Data":"7340ff9dd9fe6096cebba1927d04bc3433f5dd70048d6e1e46f26e7dfdfd751c"}
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.620847 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.638521 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.638602 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.638715 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q997\" (UniqueName: \"kubernetes.io/projected/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-kube-api-access-4q997\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.638772 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.638796 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.638847 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-config\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.643324 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.650459 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.659548 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.707095 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.743698 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-run-httpd\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.743786 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-log-httpd\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.743868 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q997\" (UniqueName: \"kubernetes.io/projected/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-kube-api-access-4q997\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.743934 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.743981 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.744035 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.744099 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-config\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.744170 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-config-data\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.744195 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh8xw\" (UniqueName: \"kubernetes.io/projected/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-kube-api-access-jh8xw\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.744226 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.744263 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-scripts\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.744301 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.744361 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.746773 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.747081 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-config\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.748557 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.751228 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.758006 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.763860 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q997\" (UniqueName: \"kubernetes.io/projected/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-kube-api-access-4q997\") pod \"dnsmasq-dns-58dd9ff6bc-s5tlv\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.796904 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9qw28"]
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.832994 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jw2qr"]
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.846775 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-config-data\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.846880 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh8xw\" (UniqueName: \"kubernetes.io/projected/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-kube-api-access-jh8xw\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.847469 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.847515 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-scripts\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.847608 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-run-httpd\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.847642 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-log-httpd\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.847701 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.849819 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-run-httpd\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.849928 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-log-httpd\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.862237 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-scripts\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.863523 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-config-data\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.867171 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.869466 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh8xw\" (UniqueName: \"kubernetes.io/projected/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-kube-api-access-jh8xw\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.870216 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " pod="openstack/ceilometer-0"
Mar 12 08:22:47 crc kubenswrapper[4809]: I0312 08:22:47.930604 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:48 crc kubenswrapper[4809]: I0312 08:22:48.003495 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 08:22:48 crc kubenswrapper[4809]: I0312 08:22:48.086004 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qxpjh"]
Mar 12 08:22:48 crc kubenswrapper[4809]: I0312 08:22:48.129815 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-98nmv"]
Mar 12 08:22:48 crc kubenswrapper[4809]: I0312 08:22:48.514472 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nzpz5"]
Mar 12 08:22:48 crc kubenswrapper[4809]: I0312 08:22:48.534327 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"]
Mar 12 08:22:48 crc kubenswrapper[4809]: I0312 08:22:48.567382 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jf5zs"]
Mar 12 08:22:48 crc kubenswrapper[4809]: I0312 08:22:48.633882 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-9qw28" event={"ID":"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662","Type":"ContainerStarted","Data":"1c20627a91ef1cf668a895f329fbc5ea6ed98c7f24fe23f79ac4d7c0cf882f6f"}
Mar 12 08:22:49 crc kubenswrapper[4809]: I0312 08:22:49.028546 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:22:49 crc kubenswrapper[4809]: W0312 08:22:49.473217 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f0ce648_579a_4e2a_9b18_3f5e1a9ead4a.slice/crio-c54e6f190b7896abc8d058d0e76090b51ca3e3354e2fe3714dfc91d9096fb985 WatchSource:0}: Error finding container c54e6f190b7896abc8d058d0e76090b51ca3e3354e2fe3714dfc91d9096fb985: Status 404 returned error can't find the container with id c54e6f190b7896abc8d058d0e76090b51ca3e3354e2fe3714dfc91d9096fb985
Mar 12 08:22:49 crc kubenswrapper[4809]: W0312 08:22:49.474104 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0857990f_7921_4ea0_a0c1_e431cc7de107.slice/crio-c6c4ca2ae7f80b7583b132d644021d0548ec90b370e1c687d1418dbea4d71710 WatchSource:0}: Error finding container c6c4ca2ae7f80b7583b132d644021d0548ec90b370e1c687d1418dbea4d71710: Status 404 returned error can't find the container with id c6c4ca2ae7f80b7583b132d644021d0548ec90b370e1c687d1418dbea4d71710
Mar 12 08:22:49 crc kubenswrapper[4809]: I0312 08:22:49.690099 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qxpjh" event={"ID":"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a","Type":"ContainerStarted","Data":"c54e6f190b7896abc8d058d0e76090b51ca3e3354e2fe3714dfc91d9096fb985"}
Mar 12 08:22:49 crc kubenswrapper[4809]: I0312 08:22:49.730450 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jf5zs" event={"ID":"760b497c-568e-4501-86c1-1c9f6b5e7f7d","Type":"ContainerStarted","Data":"5886a8d03274b4f268332922be13df87fd48b74f00b04268eeead251b5518848"}
Mar 12 08:22:49 crc kubenswrapper[4809]: I0312 08:22:49.735794 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" event={"ID":"dffab84a-0090-4a84-a40b-74a3c49452ef","Type":"ContainerStarted","Data":"d291213949e50e7e5f203c50e3cd3c94bdcedb0a3e9cb410c84e4c4fd329af54"}
Mar 12 08:22:49 crc kubenswrapper[4809]: I0312 08:22:49.737293 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-98nmv" event={"ID":"0857990f-7921-4ea0-a0c1-e431cc7de107","Type":"ContainerStarted","Data":"c6c4ca2ae7f80b7583b132d644021d0548ec90b370e1c687d1418dbea4d71710"}
Mar 12 08:22:49 crc kubenswrapper[4809]: I0312 08:22:49.738421 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzpz5" event={"ID":"d3108f49-e70c-4650-a969-b83a1ed46a14","Type":"ContainerStarted","Data":"bccef75b97c19b238681abeea313e4de7fd1bd102e0e3ed76eed8620c8d5779c"}
Mar 12 08:22:49 crc kubenswrapper[4809]: I0312 08:22:49.739524 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jw2qr" event={"ID":"a8ef0743-567a-4a4b-aada-a0bc3659b200","Type":"ContainerStarted","Data":"ced1accef93230581153926941451cabea3c98d64f62b21f05bc7086675351b1"}
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.262399 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"]
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.383247 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.771409 4809 generic.go:334] "Generic (PLEG): container finished" podID="f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" containerID="67cda58e0ebb55f9a8c4f5692956aaf61ebbb8502360d4901fba42a23857ae99" exitCode=0
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.772008 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-9qw28" event={"ID":"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662","Type":"ContainerDied","Data":"67cda58e0ebb55f9a8c4f5692956aaf61ebbb8502360d4901fba42a23857ae99"}
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.784240 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kpsqs" event={"ID":"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f","Type":"ContainerStarted","Data":"17079a3b61c88ca57390b8f5db5fd1ebd79265ceda08b79633bb30c0994cd283"}
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.790912 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51","Type":"ContainerStarted","Data":"d8443bf1cf96bab059695e478cd26dc91c322ff1cbb87bc7966e5b7dfc72a8ed"}
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.794252 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" event={"ID":"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2","Type":"ContainerStarted","Data":"ecdb64c88d0c1ac990f48631d321a7d031dd118c75166821e331178164af0ca0"}
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.794280 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" event={"ID":"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2","Type":"ContainerStarted","Data":"6366638038dee3b7d69f579b0b92fc3bf2800896219928f489dcc0bea68a47cb"}
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.825258 4809 generic.go:334] "Generic (PLEG): container finished" podID="dffab84a-0090-4a84-a40b-74a3c49452ef" containerID="cde5fc503f70c11c0bf6161915f99fa7576b6b385bdd7a42d19de8cc6ece466f" exitCode=0
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.825741 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" event={"ID":"dffab84a-0090-4a84-a40b-74a3c49452ef","Type":"ContainerDied","Data":"cde5fc503f70c11c0bf6161915f99fa7576b6b385bdd7a42d19de8cc6ece466f"}
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.834317 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kpsqs" podStartSLOduration=5.834297077 podStartE2EDuration="5.834297077s" podCreationTimestamp="2026-03-12 08:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:50.826673792 +0000 UTC m=+1444.408709525" watchObservedRunningTime="2026-03-12 08:22:50.834297077 +0000 UTC m=+1444.416332810"
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.835464 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzpz5" event={"ID":"d3108f49-e70c-4650-a969-b83a1ed46a14","Type":"ContainerStarted","Data":"4ee61acb61b7383a33753520c01f0f3ea94393b21b79270375c526414b7d0d57"}
Mar 12 08:22:50 crc kubenswrapper[4809]: I0312 08:22:50.947286 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-nzpz5" podStartSLOduration=4.947265093 podStartE2EDuration="4.947265093s" podCreationTimestamp="2026-03-12 08:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:50.899567463 +0000 UTC m=+1444.481603196" watchObservedRunningTime="2026-03-12 08:22:50.947265093 +0000 UTC m=+1444.529300826"
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.421819 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.584434 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbtl6\" (UniqueName: \"kubernetes.io/projected/dffab84a-0090-4a84-a40b-74a3c49452ef-kube-api-access-dbtl6\") pod \"dffab84a-0090-4a84-a40b-74a3c49452ef\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.584483 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-sb\") pod \"dffab84a-0090-4a84-a40b-74a3c49452ef\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.584609 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-config\") pod \"dffab84a-0090-4a84-a40b-74a3c49452ef\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.584757 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-nb\") pod \"dffab84a-0090-4a84-a40b-74a3c49452ef\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.584785 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-dns-svc\") pod \"dffab84a-0090-4a84-a40b-74a3c49452ef\" (UID: \"dffab84a-0090-4a84-a40b-74a3c49452ef\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.640487 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffab84a-0090-4a84-a40b-74a3c49452ef-kube-api-access-dbtl6" (OuterVolumeSpecName: "kube-api-access-dbtl6") pod "dffab84a-0090-4a84-a40b-74a3c49452ef" (UID: "dffab84a-0090-4a84-a40b-74a3c49452ef"). InnerVolumeSpecName "kube-api-access-dbtl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.652162 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-config" (OuterVolumeSpecName: "config") pod "dffab84a-0090-4a84-a40b-74a3c49452ef" (UID: "dffab84a-0090-4a84-a40b-74a3c49452ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.665640 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dffab84a-0090-4a84-a40b-74a3c49452ef" (UID: "dffab84a-0090-4a84-a40b-74a3c49452ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.672685 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dffab84a-0090-4a84-a40b-74a3c49452ef" (UID: "dffab84a-0090-4a84-a40b-74a3c49452ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.682012 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dffab84a-0090-4a84-a40b-74a3c49452ef" (UID: "dffab84a-0090-4a84-a40b-74a3c49452ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.690793 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbtl6\" (UniqueName: \"kubernetes.io/projected/dffab84a-0090-4a84-a40b-74a3c49452ef-kube-api-access-dbtl6\") on node \"crc\" DevicePath \"\""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.690839 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.690849 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.690860 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.690868 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffab84a-0090-4a84-a40b-74a3c49452ef-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.770413 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-9qw28"
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.857694 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz" event={"ID":"dffab84a-0090-4a84-a40b-74a3c49452ef","Type":"ContainerDied","Data":"d291213949e50e7e5f203c50e3cd3c94bdcedb0a3e9cb410c84e4c4fd329af54"}
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.857749 4809 scope.go:117] "RemoveContainer" containerID="cde5fc503f70c11c0bf6161915f99fa7576b6b385bdd7a42d19de8cc6ece466f"
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.857859 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.870846 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-9qw28" event={"ID":"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662","Type":"ContainerDied","Data":"1c20627a91ef1cf668a895f329fbc5ea6ed98c7f24fe23f79ac4d7c0cf882f6f"}
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.870901 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-9qw28"
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.883460 4809 generic.go:334] "Generic (PLEG): container finished" podID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" containerID="ecdb64c88d0c1ac990f48631d321a7d031dd118c75166821e331178164af0ca0" exitCode=0
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.885682 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" event={"ID":"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2","Type":"ContainerDied","Data":"ecdb64c88d0c1ac990f48631d321a7d031dd118c75166821e331178164af0ca0"}
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.885722 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" event={"ID":"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2","Type":"ContainerStarted","Data":"83c9a8a069771586937d929aaccafb02a39c6ae671f69689b2c351e022fa1bef"}
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.885737 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.901600 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jdlq\" (UniqueName: \"kubernetes.io/projected/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-kube-api-access-7jdlq\") pod \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.901843 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-sb\") pod \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.901864 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-nb\") pod \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.901925 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-dns-svc\") pod \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.902040 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-config\") pod \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\" (UID: \"f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662\") "
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.916385 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-kube-api-access-7jdlq" (OuterVolumeSpecName: "kube-api-access-7jdlq") pod "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" (UID: "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662"). InnerVolumeSpecName "kube-api-access-7jdlq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.936098 4809 scope.go:117] "RemoveContainer" containerID="67cda58e0ebb55f9a8c4f5692956aaf61ebbb8502360d4901fba42a23857ae99"
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.942959 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" (UID: "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.960402 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" (UID: "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.969048 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-config" (OuterVolumeSpecName: "config") pod "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" (UID: "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:22:51 crc kubenswrapper[4809]: I0312 08:22:51.982732 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" (UID: "f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.008787 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"] Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.011613 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.011652 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jdlq\" (UniqueName: \"kubernetes.io/projected/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-kube-api-access-7jdlq\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.011664 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.011678 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.011688 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.046767 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-dqdxz"] Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.065445 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" podStartSLOduration=5.065419496 podStartE2EDuration="5.065419496s" podCreationTimestamp="2026-03-12 08:22:47 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:22:51.946021766 +0000 UTC m=+1445.528057509" watchObservedRunningTime="2026-03-12 08:22:52.065419496 +0000 UTC m=+1445.647455229" Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.281323 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9qw28"] Mar 12 08:22:52 crc kubenswrapper[4809]: I0312 08:22:52.333045 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9qw28"] Mar 12 08:22:53 crc kubenswrapper[4809]: I0312 08:22:53.128610 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dffab84a-0090-4a84-a40b-74a3c49452ef" path="/var/lib/kubelet/pods/dffab84a-0090-4a84-a40b-74a3c49452ef/volumes" Mar 12 08:22:53 crc kubenswrapper[4809]: I0312 08:22:53.129310 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" path="/var/lib/kubelet/pods/f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662/volumes" Mar 12 08:22:55 crc kubenswrapper[4809]: I0312 08:22:55.583372 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 12 08:22:55 crc kubenswrapper[4809]: I0312 08:22:55.590737 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 12 08:22:55 crc kubenswrapper[4809]: I0312 08:22:55.970640 4809 generic.go:334] "Generic (PLEG): container finished" podID="d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" containerID="17079a3b61c88ca57390b8f5db5fd1ebd79265ceda08b79633bb30c0994cd283" exitCode=0 Mar 12 08:22:55 crc kubenswrapper[4809]: I0312 08:22:55.970864 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kpsqs" 
event={"ID":"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f","Type":"ContainerDied","Data":"17079a3b61c88ca57390b8f5db5fd1ebd79265ceda08b79633bb30c0994cd283"} Mar 12 08:22:55 crc kubenswrapper[4809]: I0312 08:22:55.973561 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 12 08:22:57 crc kubenswrapper[4809]: I0312 08:22:57.932361 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" Mar 12 08:22:58 crc kubenswrapper[4809]: I0312 08:22:58.103642 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hd7gx"] Mar 12 08:22:58 crc kubenswrapper[4809]: I0312 08:22:58.106820 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-hd7gx" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" containerID="cri-o://6701a3188bee1691395f8296e3fe5c114f89c6c2a68e1136a6a5ed9ec9661fc8" gracePeriod=10 Mar 12 08:22:58 crc kubenswrapper[4809]: I0312 08:22:58.868885 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:22:58 crc kubenswrapper[4809]: I0312 08:22:58.869358 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" containerID="cri-o://d0e866a7bcc1ce3dd51665675baac963d3e25c704cf9588d88017c88de540836" gracePeriod=600 Mar 12 08:22:58 crc kubenswrapper[4809]: I0312 08:22:58.869529 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="thanos-sidecar" containerID="cri-o://a87fa4653ed1debf5091aff3f72f0c8d027e56931516dd414e3d23d806fa33b1" gracePeriod=600 Mar 12 08:22:58 crc kubenswrapper[4809]: I0312 08:22:58.869617 4809 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="config-reloader" containerID="cri-o://cb451e2cafc21b5864c8d08ce2407b5b5779919d542166eca1ee8fd0cb372aa9" gracePeriod=600 Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.135329 4809 generic.go:334] "Generic (PLEG): container finished" podID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerID="a87fa4653ed1debf5091aff3f72f0c8d027e56931516dd414e3d23d806fa33b1" exitCode=0 Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.135849 4809 generic.go:334] "Generic (PLEG): container finished" podID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerID="d0e866a7bcc1ce3dd51665675baac963d3e25c704cf9588d88017c88de540836" exitCode=0 Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.135414 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerDied","Data":"a87fa4653ed1debf5091aff3f72f0c8d027e56931516dd414e3d23d806fa33b1"} Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.135947 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerDied","Data":"d0e866a7bcc1ce3dd51665675baac963d3e25c704cf9588d88017c88de540836"} Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.138252 4809 generic.go:334] "Generic (PLEG): container finished" podID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerID="6701a3188bee1691395f8296e3fe5c114f89c6c2a68e1136a6a5ed9ec9661fc8" exitCode=0 Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.138334 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hd7gx" event={"ID":"085fdfa8-88d8-460c-82cf-87d59d145d7c","Type":"ContainerDied","Data":"6701a3188bee1691395f8296e3fe5c114f89c6c2a68e1136a6a5ed9ec9661fc8"} 
Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.140342 4809 generic.go:334] "Generic (PLEG): container finished" podID="cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" containerID="b1cdfec6c07a1cb3b52e6148237b1b1cffebcdeb51474dc302a93b1b39f0ba4c" exitCode=0 Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.140372 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6pfp4" event={"ID":"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda","Type":"ContainerDied","Data":"b1cdfec6c07a1cb3b52e6148237b1b1cffebcdeb51474dc302a93b1b39f0ba4c"} Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.225758 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hd7gx" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Mar 12 08:22:59 crc kubenswrapper[4809]: I0312 08:22:59.719798 4809 scope.go:117] "RemoveContainer" containerID="a943e7d437baa098c22ea2afb32f2ab7a588ef39c670ce6fe89510e51b74ed6f" Mar 12 08:23:00 crc kubenswrapper[4809]: I0312 08:23:00.157588 4809 generic.go:334] "Generic (PLEG): container finished" podID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerID="cb451e2cafc21b5864c8d08ce2407b5b5779919d542166eca1ee8fd0cb372aa9" exitCode=0 Mar 12 08:23:00 crc kubenswrapper[4809]: I0312 08:23:00.157681 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerDied","Data":"cb451e2cafc21b5864c8d08ce2407b5b5779919d542166eca1ee8fd0cb372aa9"} Mar 12 08:23:00 crc kubenswrapper[4809]: I0312 08:23:00.583931 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.156:9090/-/ready\": dial tcp 10.217.0.156:9090: connect: connection refused" 
Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.882445 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8bcp4"] Mar 12 08:23:01 crc kubenswrapper[4809]: E0312 08:23:01.886312 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" containerName="init" Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.886354 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" containerName="init" Mar 12 08:23:01 crc kubenswrapper[4809]: E0312 08:23:01.886396 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffab84a-0090-4a84-a40b-74a3c49452ef" containerName="init" Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.886403 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffab84a-0090-4a84-a40b-74a3c49452ef" containerName="init" Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.886904 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f83d29a3-fb96-4ad9-9d41-d2dd9bbf2662" containerName="init" Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.886928 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="dffab84a-0090-4a84-a40b-74a3c49452ef" containerName="init" Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.888738 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.910159 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8bcp4"] Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.979886 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9zt\" (UniqueName: \"kubernetes.io/projected/2d78fd7f-3718-4999-a661-ad590dc808a5-kube-api-access-vh9zt\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.980016 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-catalog-content\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:01 crc kubenswrapper[4809]: I0312 08:23:01.980243 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-utilities\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:02 crc kubenswrapper[4809]: I0312 08:23:02.082494 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9zt\" (UniqueName: \"kubernetes.io/projected/2d78fd7f-3718-4999-a661-ad590dc808a5-kube-api-access-vh9zt\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:02 crc kubenswrapper[4809]: I0312 08:23:02.082566 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-catalog-content\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:02 crc kubenswrapper[4809]: I0312 08:23:02.082648 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-utilities\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:02 crc kubenswrapper[4809]: I0312 08:23:02.083421 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-utilities\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:02 crc kubenswrapper[4809]: I0312 08:23:02.083954 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-catalog-content\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:02 crc kubenswrapper[4809]: I0312 08:23:02.106603 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9zt\" (UniqueName: \"kubernetes.io/projected/2d78fd7f-3718-4999-a661-ad590dc808a5-kube-api-access-vh9zt\") pod \"redhat-operators-8bcp4\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:02 crc kubenswrapper[4809]: I0312 08:23:02.259191 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:04 crc kubenswrapper[4809]: I0312 08:23:04.225891 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hd7gx" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Mar 12 08:23:05 crc kubenswrapper[4809]: I0312 08:23:05.583663 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.156:9090/-/ready\": dial tcp 10.217.0.156:9090: connect: connection refused" Mar 12 08:23:07 crc kubenswrapper[4809]: I0312 08:23:07.922493 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:23:07 crc kubenswrapper[4809]: I0312 08:23:07.932217 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6pfp4" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.070837 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-combined-ca-bundle\") pod \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071051 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-config-data\") pod \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071140 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-fernet-keys\") pod \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071211 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-credential-keys\") pod \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071236 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlmz7\" (UniqueName: \"kubernetes.io/projected/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-kube-api-access-hlmz7\") pod \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071253 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zqhrf\" (UniqueName: \"kubernetes.io/projected/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-kube-api-access-zqhrf\") pod \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071329 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-scripts\") pod \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071388 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-combined-ca-bundle\") pod \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\" (UID: \"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071415 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-config-data\") pod \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.071439 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-db-sync-config-data\") pod \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\" (UID: \"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda\") " Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.080666 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" (UID: "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.082058 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" (UID: "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.086092 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-kube-api-access-zqhrf" (OuterVolumeSpecName: "kube-api-access-zqhrf") pod "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" (UID: "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f"). InnerVolumeSpecName "kube-api-access-zqhrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.087526 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-scripts" (OuterVolumeSpecName: "scripts") pod "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" (UID: "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.086703 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-kube-api-access-hlmz7" (OuterVolumeSpecName: "kube-api-access-hlmz7") pod "cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" (UID: "cdb484e1-36e8-4bfe-aeb2-72fc1c331cda"). InnerVolumeSpecName "kube-api-access-hlmz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.099915 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" (UID: "cdb484e1-36e8-4bfe-aeb2-72fc1c331cda"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.134031 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-config-data" (OuterVolumeSpecName: "config-data") pod "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" (UID: "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.136204 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" (UID: "d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.158721 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" (UID: "cdb484e1-36e8-4bfe-aeb2-72fc1c331cda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.174053 4809 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.179274 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlmz7\" (UniqueName: \"kubernetes.io/projected/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-kube-api-access-hlmz7\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.179434 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqhrf\" (UniqueName: \"kubernetes.io/projected/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-kube-api-access-zqhrf\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.179497 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.179568 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.179633 4809 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.179707 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: 
I0312 08:23:08.179769 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.179833 4809 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.187862 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-config-data" (OuterVolumeSpecName: "config-data") pod "cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" (UID: "cdb484e1-36e8-4bfe-aeb2-72fc1c331cda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.261780 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kpsqs" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.262587 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kpsqs" event={"ID":"d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f","Type":"ContainerDied","Data":"7340ff9dd9fe6096cebba1927d04bc3433f5dd70048d6e1e46f26e7dfdfd751c"} Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.262650 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7340ff9dd9fe6096cebba1927d04bc3433f5dd70048d6e1e46f26e7dfdfd751c" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.273123 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6pfp4" event={"ID":"cdb484e1-36e8-4bfe-aeb2-72fc1c331cda","Type":"ContainerDied","Data":"36ab76b7b081ed3bda886d8dd032c34847125f28d121f5169cb8922ae69e1541"} Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.273183 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ab76b7b081ed3bda886d8dd032c34847125f28d121f5169cb8922ae69e1541" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.273377 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6pfp4" Mar 12 08:23:08 crc kubenswrapper[4809]: I0312 08:23:08.281919 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.061021 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kpsqs"] Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.070552 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kpsqs"] Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.130789 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" path="/var/lib/kubelet/pods/d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f/volumes" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.158958 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jssh7"] Mar 12 08:23:09 crc kubenswrapper[4809]: E0312 08:23:09.159900 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" containerName="keystone-bootstrap" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.159927 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" containerName="keystone-bootstrap" Mar 12 08:23:09 crc kubenswrapper[4809]: E0312 08:23:09.159962 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" containerName="glance-db-sync" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.159975 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" containerName="glance-db-sync" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.160298 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" containerName="glance-db-sync" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.160334 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65a2f27-1059-4ee6-ac2c-5a42b92c4a5f" containerName="keystone-bootstrap" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.161615 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.170178 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.170378 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.170541 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.170755 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gndsl" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.170853 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.177097 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jssh7"] Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.304506 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-fernet-keys\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.304573 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gtphd\" (UniqueName: \"kubernetes.io/projected/23f0c997-5c6b-4229-911e-efe7b42f59f7-kube-api-access-gtphd\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.304598 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-config-data\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.304633 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-combined-ca-bundle\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.304651 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-credential-keys\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.304754 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-scripts\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.406844 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-combined-ca-bundle\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.406903 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-credential-keys\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.407049 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-scripts\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.407183 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-fernet-keys\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.407223 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtphd\" (UniqueName: \"kubernetes.io/projected/23f0c997-5c6b-4229-911e-efe7b42f59f7-kube-api-access-gtphd\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.407242 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-config-data\") pod 
\"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.415732 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-config-data\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.415835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-combined-ca-bundle\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.416324 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-fernet-keys\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.421238 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-scripts\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.422132 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-credential-keys\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: 
I0312 08:23:09.433784 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtphd\" (UniqueName: \"kubernetes.io/projected/23f0c997-5c6b-4229-911e-efe7b42f59f7-kube-api-access-gtphd\") pod \"keystone-bootstrap-jssh7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.484048 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.562386 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kn7bt"] Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.569828 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.585260 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kn7bt"] Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.716447 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nm4f\" (UniqueName: \"kubernetes.io/projected/a6fc9567-7eef-4445-be15-c70a81872ae6-kube-api-access-7nm4f\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.716999 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.717022 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-config\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.717045 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.717070 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.717094 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.819240 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-config\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.819287 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.819320 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.819348 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.819374 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.819454 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nm4f\" (UniqueName: \"kubernetes.io/projected/a6fc9567-7eef-4445-be15-c70a81872ae6-kube-api-access-7nm4f\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.820498 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.820576 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-config\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.820705 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.820817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.821158 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.852531 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nm4f\" (UniqueName: 
\"kubernetes.io/projected/a6fc9567-7eef-4445-be15-c70a81872ae6-kube-api-access-7nm4f\") pod \"dnsmasq-dns-785d8bcb8c-kn7bt\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:09 crc kubenswrapper[4809]: I0312 08:23:09.930540 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.404176 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.410569 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.413715 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.414074 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6kwkz" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.420390 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.444413 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.545736 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.545809 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-scripts\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.546061 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzw6\" (UniqueName: \"kubernetes.io/projected/2216a968-959e-4bb5-943e-7d8c0dbd5405-kube-api-access-zrzw6\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.546195 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.546237 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-logs\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.546458 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-config-data\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.546610 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.649133 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.649472 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-logs\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.649654 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-config-data\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.649802 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.649974 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.650068 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-scripts\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.650202 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrzw6\" (UniqueName: \"kubernetes.io/projected/2216a968-959e-4bb5-943e-7d8c0dbd5405-kube-api-access-zrzw6\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.651451 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-logs\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.652610 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.656871 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-config-data\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.657351 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.657403 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13e7c719e4a31debe9dfef24b9790869cfa71b5da18da849b6185df18b1db2d8/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.657630 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.672933 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-scripts\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.678497 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrzw6\" (UniqueName: 
\"kubernetes.io/projected/2216a968-959e-4bb5-943e-7d8c0dbd5405-kube-api-access-zrzw6\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.740570 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.747829 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.749860 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.752962 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.765369 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.781725 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.858178 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.858515 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.858683 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.858785 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7ftz\" (UniqueName: \"kubernetes.io/projected/d3259c31-3ce2-44c0-8a04-7672fffde9a2-kube-api-access-l7ftz\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.858967 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.859101 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.859239 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.961593 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.961671 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.961746 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.961775 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.961817 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.961836 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7ftz\" (UniqueName: \"kubernetes.io/projected/d3259c31-3ce2-44c0-8a04-7672fffde9a2-kube-api-access-l7ftz\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.961893 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.963529 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.963801 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.971782 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.971849 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5a7d6d655feca30f79868ff57bcbc2eb87574a4dc9b83f3feb9e7096f859fa1/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.973864 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.977998 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.980345 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:10 crc kubenswrapper[4809]: I0312 08:23:10.991975 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7ftz\" (UniqueName: \"kubernetes.io/projected/d3259c31-3ce2-44c0-8a04-7672fffde9a2-kube-api-access-l7ftz\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:11 crc kubenswrapper[4809]: I0312 08:23:11.046274 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:11 crc kubenswrapper[4809]: I0312 08:23:11.143755 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:12 crc kubenswrapper[4809]: I0312 08:23:12.485775 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:12 crc kubenswrapper[4809]: I0312 08:23:12.682087 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:13 crc kubenswrapper[4809]: I0312 08:23:13.376601 4809 generic.go:334] "Generic (PLEG): container finished" podID="d3108f49-e70c-4650-a969-b83a1ed46a14" containerID="4ee61acb61b7383a33753520c01f0f3ea94393b21b79270375c526414b7d0d57" exitCode=0 Mar 12 08:23:13 crc kubenswrapper[4809]: I0312 08:23:13.376675 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzpz5" event={"ID":"d3108f49-e70c-4650-a969-b83a1ed46a14","Type":"ContainerDied","Data":"4ee61acb61b7383a33753520c01f0f3ea94393b21b79270375c526414b7d0d57"} Mar 12 08:23:13 crc kubenswrapper[4809]: I0312 08:23:13.583682 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.156:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:23:13 crc kubenswrapper[4809]: I0312 08:23:13.583858 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:14 crc kubenswrapper[4809]: I0312 08:23:14.226237 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hd7gx" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Mar 12 08:23:14 crc kubenswrapper[4809]: I0312 08:23:14.226950 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:23:18 crc kubenswrapper[4809]: I0312 08:23:18.584429 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.156:9090/-/ready\": dial tcp 10.217.0.156:9090: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 12 08:23:19 crc kubenswrapper[4809]: I0312 08:23:19.227572 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hd7gx" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Mar 12 08:23:20 crc kubenswrapper[4809]: E0312 08:23:20.920808 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Mar 12 08:23:20 crc kubenswrapper[4809]: E0312 08:23:20.921762 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nchd6h67h5cfh64bh58dh665h555h58dh584h645h5fbh646h54dh68bh96h65dh5d6h56chb4hd9h5cch546hf6h55ch5ch577hd8hb8hbh556hfbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh8xw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e08acf54-60ca-4dc7-bd01-ce8fed2bcc51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:23:20 crc kubenswrapper[4809]: I0312 08:23:20.925206 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:23:21 crc kubenswrapper[4809]: E0312 08:23:21.191265 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Mar 12 08:23:21 crc kubenswrapper[4809]: E0312 08:23:21.191512 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htrmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jw2qr_openstack(a8ef0743-567a-4a4b-aada-a0bc3659b200): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 
12 08:23:21 crc kubenswrapper[4809]: E0312 08:23:21.192733 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-jw2qr" podUID="a8ef0743-567a-4a4b-aada-a0bc3659b200" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.378244 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.483672 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4qdw\" (UniqueName: \"kubernetes.io/projected/085fdfa8-88d8-460c-82cf-87d59d145d7c-kube-api-access-f4qdw\") pod \"085fdfa8-88d8-460c-82cf-87d59d145d7c\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.483811 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-config\") pod \"085fdfa8-88d8-460c-82cf-87d59d145d7c\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.483898 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-dns-svc\") pod \"085fdfa8-88d8-460c-82cf-87d59d145d7c\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.484035 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-sb\") pod \"085fdfa8-88d8-460c-82cf-87d59d145d7c\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.484063 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-nb\") pod \"085fdfa8-88d8-460c-82cf-87d59d145d7c\" (UID: \"085fdfa8-88d8-460c-82cf-87d59d145d7c\") " Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.503521 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085fdfa8-88d8-460c-82cf-87d59d145d7c-kube-api-access-f4qdw" (OuterVolumeSpecName: "kube-api-access-f4qdw") pod "085fdfa8-88d8-460c-82cf-87d59d145d7c" (UID: "085fdfa8-88d8-460c-82cf-87d59d145d7c"). InnerVolumeSpecName "kube-api-access-f4qdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.509048 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hd7gx" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.509195 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hd7gx" event={"ID":"085fdfa8-88d8-460c-82cf-87d59d145d7c","Type":"ContainerDied","Data":"3b1cb9d7be66c6b8c750119aae9d6caaff862813e6431b51f08b7f7bb54fda04"} Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.509301 4809 scope.go:117] "RemoveContainer" containerID="6701a3188bee1691395f8296e3fe5c114f89c6c2a68e1136a6a5ed9ec9661fc8" Mar 12 08:23:21 crc kubenswrapper[4809]: E0312 08:23:21.511148 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-jw2qr" podUID="a8ef0743-567a-4a4b-aada-a0bc3659b200" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.543471 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-config" (OuterVolumeSpecName: "config") pod "085fdfa8-88d8-460c-82cf-87d59d145d7c" (UID: "085fdfa8-88d8-460c-82cf-87d59d145d7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.561617 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "085fdfa8-88d8-460c-82cf-87d59d145d7c" (UID: "085fdfa8-88d8-460c-82cf-87d59d145d7c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.567685 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "085fdfa8-88d8-460c-82cf-87d59d145d7c" (UID: "085fdfa8-88d8-460c-82cf-87d59d145d7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.583218 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "085fdfa8-88d8-460c-82cf-87d59d145d7c" (UID: "085fdfa8-88d8-460c-82cf-87d59d145d7c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.586829 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.586859 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.586872 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4qdw\" (UniqueName: \"kubernetes.io/projected/085fdfa8-88d8-460c-82cf-87d59d145d7c-kube-api-access-f4qdw\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.586885 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.586893 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/085fdfa8-88d8-460c-82cf-87d59d145d7c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.851426 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hd7gx"] Mar 12 08:23:21 crc kubenswrapper[4809]: I0312 08:23:21.861352 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hd7gx"] Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.009202 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 12 08:23:23 crc 
kubenswrapper[4809]: E0312 08:23:23.009794 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44gf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePull
Policy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qxpjh_openstack(8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.010992 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qxpjh" podUID="8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.079701 4809 scope.go:117] "RemoveContainer" containerID="2d85fec4720bcc579fded2d352fa130084e253678bb26851f353a19f38483718" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.178918 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" path="/var/lib/kubelet/pods/085fdfa8-88d8-460c-82cf-87d59d145d7c/volumes" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.291915 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.317821 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370157 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-tls-assets\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370227 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-config\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370349 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-0\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370444 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-1\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370512 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-thanos-prometheus-http-client-file\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370592 
4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fefb3329-70bc-45d1-ac98-44f1836b3470-config-out\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370719 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpplf\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-kube-api-access-dpplf\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370894 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370957 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-web-config\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.370988 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-2\") pod \"fefb3329-70bc-45d1-ac98-44f1836b3470\" (UID: \"fefb3329-70bc-45d1-ac98-44f1836b3470\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.372812 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.373292 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.373873 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.374538 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.374557 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.374567 4809 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fefb3329-70bc-45d1-ac98-44f1836b3470-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.382549 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.382634 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fefb3329-70bc-45d1-ac98-44f1836b3470-config-out" (OuterVolumeSpecName: "config-out") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.382659 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.382715 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-kube-api-access-dpplf" (OuterVolumeSpecName: "kube-api-access-dpplf") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "kube-api-access-dpplf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.383516 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-config" (OuterVolumeSpecName: "config") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.403582 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.435300 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-web-config" (OuterVolumeSpecName: "web-config") pod "fefb3329-70bc-45d1-ac98-44f1836b3470" (UID: "fefb3329-70bc-45d1-ac98-44f1836b3470"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.476173 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd64d\" (UniqueName: \"kubernetes.io/projected/d3108f49-e70c-4650-a969-b83a1ed46a14-kube-api-access-xd64d\") pod \"d3108f49-e70c-4650-a969-b83a1ed46a14\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.476478 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-config\") pod \"d3108f49-e70c-4650-a969-b83a1ed46a14\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.476561 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-combined-ca-bundle\") pod \"d3108f49-e70c-4650-a969-b83a1ed46a14\" (UID: \"d3108f49-e70c-4650-a969-b83a1ed46a14\") " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.477078 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpplf\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-kube-api-access-dpplf\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.477132 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") on node \"crc\" " Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.477148 4809 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-web-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.477158 4809 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fefb3329-70bc-45d1-ac98-44f1836b3470-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.477169 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.477182 4809 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fefb3329-70bc-45d1-ac98-44f1836b3470-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.477193 4809 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fefb3329-70bc-45d1-ac98-44f1836b3470-config-out\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.485197 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3108f49-e70c-4650-a969-b83a1ed46a14-kube-api-access-xd64d" (OuterVolumeSpecName: "kube-api-access-xd64d") pod "d3108f49-e70c-4650-a969-b83a1ed46a14" (UID: "d3108f49-e70c-4650-a969-b83a1ed46a14"). InnerVolumeSpecName "kube-api-access-xd64d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.528227 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.528478 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9") on node "crc" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.553058 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.553101 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fefb3329-70bc-45d1-ac98-44f1836b3470","Type":"ContainerDied","Data":"cd575372eb730b27c2d60f0239e38424a260f94b7dbba5b1c3812c57c07c508c"} Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.553779 4809 scope.go:117] "RemoveContainer" containerID="a87fa4653ed1debf5091aff3f72f0c8d027e56931516dd414e3d23d806fa33b1" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.572138 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-config" (OuterVolumeSpecName: "config") pod "d3108f49-e70c-4650-a969-b83a1ed46a14" (UID: "d3108f49-e70c-4650-a969-b83a1ed46a14"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.579646 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.579680 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.579690 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd64d\" (UniqueName: \"kubernetes.io/projected/d3108f49-e70c-4650-a969-b83a1ed46a14-kube-api-access-xd64d\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.580862 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nzpz5" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.581085 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzpz5" event={"ID":"d3108f49-e70c-4650-a969-b83a1ed46a14","Type":"ContainerDied","Data":"bccef75b97c19b238681abeea313e4de7fd1bd102e0e3ed76eed8620c8d5779c"} Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.581136 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bccef75b97c19b238681abeea313e4de7fd1bd102e0e3ed76eed8620c8d5779c" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.582639 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.156:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.590608 4809 scope.go:117] "RemoveContainer" containerID="cb451e2cafc21b5864c8d08ce2407b5b5779919d542166eca1ee8fd0cb372aa9" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.593551 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3108f49-e70c-4650-a969-b83a1ed46a14" (UID: "d3108f49-e70c-4650-a969-b83a1ed46a14"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.594023 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qxpjh" podUID="8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.612849 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.630239 4809 scope.go:117] "RemoveContainer" containerID="d0e866a7bcc1ce3dd51665675baac963d3e25c704cf9588d88017c88de540836" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.652182 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.669091 4809 scope.go:117] "RemoveContainer" containerID="8fe98d57f4f19a9c27d3d704cb13af2e6683cbc0c4edb4b0457e6d79607de148" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.678470 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.681021 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="init-config-reloader" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.681092 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="init-config-reloader" Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.681135 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="thanos-sidecar" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.681144 4809 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="thanos-sidecar" Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.681165 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="init" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.681172 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="init" Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.681188 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.681215 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.681243 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="config-reloader" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.681249 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="config-reloader" Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.681265 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3108f49-e70c-4650-a969-b83a1ed46a14" containerName="neutron-db-sync" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.681271 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3108f49-e70c-4650-a969-b83a1ed46a14" containerName="neutron-db-sync" Mar 12 08:23:23 crc kubenswrapper[4809]: E0312 08:23:23.681308 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.681315 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.682096 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="prometheus" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.682145 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="thanos-sidecar" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.682165 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" containerName="config-reloader" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.682178 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.682588 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3108f49-e70c-4650-a969-b83a1ed46a14" containerName="neutron-db-sync" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.686004 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3108f49-e70c-4650-a969-b83a1ed46a14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.698908 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.700812 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.713176 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.713446 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-p5jc5" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.715801 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.716328 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.716910 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.717035 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.717168 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.717351 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.724952 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789014 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789063 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd662\" (UniqueName: \"kubernetes.io/projected/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-kube-api-access-fd662\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789088 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789155 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789221 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789250 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789292 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789331 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789356 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789389 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789404 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789472 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.789491 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892007 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " 
pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892051 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892091 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892132 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd662\" (UniqueName: \"kubernetes.io/projected/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-kube-api-access-fd662\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892160 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892211 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892284 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892318 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892353 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892392 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892417 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892452 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.892470 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.893054 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.896067 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.896173 
4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.897172 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.897965 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.902781 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.905328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc 
kubenswrapper[4809]: I0312 08:23:23.906399 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.910649 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.912573 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.916805 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.916864 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07900c41f367c6133b243a696847e564daee14d73a2768d37c66f6e5f7b4cf48/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.917838 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.922379 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd662\" (UniqueName: \"kubernetes.io/projected/9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5-kube-api-access-fd662\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:23 crc kubenswrapper[4809]: I0312 08:23:23.972760 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8df94e62-7c52-43ba-aa96-2fa0463399b9\") pod \"prometheus-metric-storage-0\" (UID: \"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5\") " pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.011214 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jssh7"] Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 
08:23:24.023100 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8bcp4"] Mar 12 08:23:24 crc kubenswrapper[4809]: W0312 08:23:24.023959 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23f0c997_5c6b_4229_911e_efe7b42f59f7.slice/crio-1b79888e4c6ec91d793ebcd313e30d7b38397604180b0ae74604293dfc52597c WatchSource:0}: Error finding container 1b79888e4c6ec91d793ebcd313e30d7b38397604180b0ae74604293dfc52597c: Status 404 returned error can't find the container with id 1b79888e4c6ec91d793ebcd313e30d7b38397604180b0ae74604293dfc52597c Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.058002 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.067868 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kn7bt"] Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.148670 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.228739 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hd7gx" podUID="085fdfa8-88d8-460c-82cf-87d59d145d7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.591700 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kn7bt"] Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.607990 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jf5zs" event={"ID":"760b497c-568e-4501-86c1-1c9f6b5e7f7d","Type":"ContainerStarted","Data":"0db4a14003c408fe75529fbe0259f755b43e44c40008f4a33a1ce14f88f9e1f5"} Mar 12 08:23:24 crc kubenswrapper[4809]: 
I0312 08:23:24.617272 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bcp4" event={"ID":"2d78fd7f-3718-4999-a661-ad590dc808a5","Type":"ContainerStarted","Data":"7d41e40b9a666531a3ebf1ec61e9a147c13de70d148d8d03485e7f098977c5d2"} Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.622263 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-98nmv" event={"ID":"0857990f-7921-4ea0-a0c1-e431cc7de107","Type":"ContainerStarted","Data":"0be9d23b1007dffd29b365806fec94ed61f269d21b25655a004b7ec4fa52ad75"} Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.625307 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fp9lq"] Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.627238 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jssh7" event={"ID":"23f0c997-5c6b-4229-911e-efe7b42f59f7","Type":"ContainerStarted","Data":"1b79888e4c6ec91d793ebcd313e30d7b38397604180b0ae74604293dfc52597c"} Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.629364 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.679284 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fp9lq"] Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.708434 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jf5zs" podStartSLOduration=5.285995314 podStartE2EDuration="38.708412127s" podCreationTimestamp="2026-03-12 08:22:46 +0000 UTC" firstStartedPulling="2026-03-12 08:22:49.542090036 +0000 UTC m=+1443.124125769" lastFinishedPulling="2026-03-12 08:23:22.964506849 +0000 UTC m=+1476.546542582" observedRunningTime="2026-03-12 08:23:24.641824286 +0000 UTC m=+1478.223860019" watchObservedRunningTime="2026-03-12 08:23:24.708412127 +0000 UTC m=+1478.290447860" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.778825 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-config\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.779817 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.780015 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: 
\"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.780379 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gk9j\" (UniqueName: \"kubernetes.io/projected/fe74aae4-4752-4c38-99a7-eb88886cf5bf-kube-api-access-8gk9j\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.780484 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.843407 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-98nmv" podStartSLOduration=7.104909142 podStartE2EDuration="38.843378128s" podCreationTimestamp="2026-03-12 08:22:46 +0000 UTC" firstStartedPulling="2026-03-12 08:22:49.487182761 +0000 UTC m=+1443.069218494" lastFinishedPulling="2026-03-12 08:23:21.225651747 +0000 UTC m=+1474.807687480" observedRunningTime="2026-03-12 08:23:24.739373524 +0000 UTC m=+1478.321409257" watchObservedRunningTime="2026-03-12 08:23:24.843378128 +0000 UTC m=+1478.425413861" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.907755 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:24 crc kubenswrapper[4809]: 
I0312 08:23:24.960250 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7877ddd69d-mkc5h"] Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.962897 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.968867 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.982083 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-25nrr" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.982412 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.982652 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 08:23:24 crc kubenswrapper[4809]: I0312 08:23:24.983389 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7877ddd69d-mkc5h"] Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.034601 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-ovndb-tls-certs\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.034723 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-config\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.034765 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-httpd-config\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.034833 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.034857 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvbt\" (UniqueName: \"kubernetes.io/projected/160fb026-73c9-4fbd-8602-aa6de6bc9417-kube-api-access-svvbt\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.034915 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.034968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-config\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.035004 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-8gk9j\" (UniqueName: \"kubernetes.io/projected/fe74aae4-4752-4c38-99a7-eb88886cf5bf-kube-api-access-8gk9j\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.035033 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-combined-ca-bundle\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.035074 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.035179 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.036362 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.036973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-config\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.037555 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.038626 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.038830 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.105222 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gk9j\" (UniqueName: \"kubernetes.io/projected/fe74aae4-4752-4c38-99a7-eb88886cf5bf-kube-api-access-8gk9j\") pod \"dnsmasq-dns-55f844cf75-fp9lq\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.158173 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-httpd-config\") pod 
\"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.158528 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvbt\" (UniqueName: \"kubernetes.io/projected/160fb026-73c9-4fbd-8602-aa6de6bc9417-kube-api-access-svvbt\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.158839 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-config\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.158912 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-combined-ca-bundle\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.159422 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-ovndb-tls-certs\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.170820 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-httpd-config\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " 
pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.174474 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-config\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.183368 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-combined-ca-bundle\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.188217 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvbt\" (UniqueName: \"kubernetes.io/projected/160fb026-73c9-4fbd-8602-aa6de6bc9417-kube-api-access-svvbt\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.208092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-ovndb-tls-certs\") pod \"neutron-7877ddd69d-mkc5h\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.230130 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fefb3329-70bc-45d1-ac98-44f1836b3470" path="/var/lib/kubelet/pods/fefb3329-70bc-45d1-ac98-44f1836b3470/volumes" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.230959 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 
08:23:25.284630 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:25 crc kubenswrapper[4809]: I0312 08:23:25.356471 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:25 crc kubenswrapper[4809]: W0312 08:23:25.762085 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3259c31_3ce2_44c0_8a04_7672fffde9a2.slice/crio-1821c14abd9155306fef5013067538396e92cba1ab7a6be88198f93044253a7d WatchSource:0}: Error finding container 1821c14abd9155306fef5013067538396e92cba1ab7a6be88198f93044253a7d: Status 404 returned error can't find the container with id 1821c14abd9155306fef5013067538396e92cba1ab7a6be88198f93044253a7d Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.608694 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fp9lq"] Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.652831 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.697143 4809 generic.go:334] "Generic (PLEG): container finished" podID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerID="68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff" exitCode=0 Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.698624 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bcp4" event={"ID":"2d78fd7f-3718-4999-a661-ad590dc808a5","Type":"ContainerDied","Data":"68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff"} Mar 12 08:23:26 crc kubenswrapper[4809]: W0312 08:23:26.701049 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe74aae4_4752_4c38_99a7_eb88886cf5bf.slice/crio-df796b21d07e69e67d2a4a972870f33362b8dc904a857a3681c4166308719c00 WatchSource:0}: Error finding container df796b21d07e69e67d2a4a972870f33362b8dc904a857a3681c4166308719c00: Status 404 returned error can't find the container with id df796b21d07e69e67d2a4a972870f33362b8dc904a857a3681c4166308719c00 Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.709943 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2216a968-959e-4bb5-943e-7d8c0dbd5405","Type":"ContainerStarted","Data":"5ed8c959f50a9aed8c0ae867fac0902a9f96082674b24b5536520430be2bf532"} Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.713599 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5","Type":"ContainerStarted","Data":"ff42f1fb7bc68015d782f7ceaee2982eb07a0c86e3b4ed235219b374d3c3052c"} Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.737187 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jssh7" event={"ID":"23f0c997-5c6b-4229-911e-efe7b42f59f7","Type":"ContainerStarted","Data":"d8fa398a028701803e96d3104388e36869dab06e89ada1de3c840908f82cf454"} Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.742709 4809 generic.go:334] "Generic (PLEG): container finished" podID="a6fc9567-7eef-4445-be15-c70a81872ae6" containerID="c99b1a7ed8285bd03656959c39167924a9a4d9d7beb19db8f9a8ffd5c5af4b50" exitCode=0 Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.742800 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" event={"ID":"a6fc9567-7eef-4445-be15-c70a81872ae6","Type":"ContainerDied","Data":"c99b1a7ed8285bd03656959c39167924a9a4d9d7beb19db8f9a8ffd5c5af4b50"} Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.742854 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" event={"ID":"a6fc9567-7eef-4445-be15-c70a81872ae6","Type":"ContainerStarted","Data":"0aed50646fd11d60981a63a0be618e398d5104d7dd32730bea98e7afacd8fe0a"} Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.746280 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3259c31-3ce2-44c0-8a04-7672fffde9a2","Type":"ContainerStarted","Data":"1821c14abd9155306fef5013067538396e92cba1ab7a6be88198f93044253a7d"} Mar 12 08:23:26 crc kubenswrapper[4809]: I0312 08:23:26.772498 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jssh7" podStartSLOduration=17.772476364 podStartE2EDuration="17.772476364s" podCreationTimestamp="2026-03-12 08:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:26.767746707 +0000 UTC m=+1480.349782440" watchObservedRunningTime="2026-03-12 08:23:26.772476364 +0000 UTC m=+1480.354512097" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.063187 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7877ddd69d-mkc5h"] Mar 12 08:23:27 crc kubenswrapper[4809]: W0312 08:23:27.093856 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160fb026_73c9_4fbd_8602_aa6de6bc9417.slice/crio-95967317ce2ce6ace655df6f194afdab6cb92db9202c013ec8808385ee4def76 WatchSource:0}: Error finding container 95967317ce2ce6ace655df6f194afdab6cb92db9202c013ec8808385ee4def76: Status 404 returned error can't find the container with id 95967317ce2ce6ace655df6f194afdab6cb92db9202c013ec8808385ee4def76 Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.497559 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.510692 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b85bc8f8c-p6zd6"] Mar 12 08:23:27 crc kubenswrapper[4809]: E0312 08:23:27.511354 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6fc9567-7eef-4445-be15-c70a81872ae6" containerName="init" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.511374 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6fc9567-7eef-4445-be15-c70a81872ae6" containerName="init" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.511636 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6fc9567-7eef-4445-be15-c70a81872ae6" containerName="init" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.513028 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.519594 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.519822 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.568988 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b85bc8f8c-p6zd6"] Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.587644 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-swift-storage-0\") pod \"a6fc9567-7eef-4445-be15-c70a81872ae6\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.587688 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-sb\") pod \"a6fc9567-7eef-4445-be15-c70a81872ae6\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.587815 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nm4f\" (UniqueName: \"kubernetes.io/projected/a6fc9567-7eef-4445-be15-c70a81872ae6-kube-api-access-7nm4f\") pod \"a6fc9567-7eef-4445-be15-c70a81872ae6\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.587889 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-svc\") pod \"a6fc9567-7eef-4445-be15-c70a81872ae6\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.587917 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-config\") pod \"a6fc9567-7eef-4445-be15-c70a81872ae6\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.588029 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-nb\") pod \"a6fc9567-7eef-4445-be15-c70a81872ae6\" (UID: \"a6fc9567-7eef-4445-be15-c70a81872ae6\") " Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.588659 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-httpd-config\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " 
pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.588699 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcprg\" (UniqueName: \"kubernetes.io/projected/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-kube-api-access-fcprg\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.588721 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-internal-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.588740 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-public-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.588759 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-combined-ca-bundle\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.588877 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-config\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: 
\"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.588942 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-ovndb-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.622465 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6fc9567-7eef-4445-be15-c70a81872ae6-kube-api-access-7nm4f" (OuterVolumeSpecName: "kube-api-access-7nm4f") pod "a6fc9567-7eef-4445-be15-c70a81872ae6" (UID: "a6fc9567-7eef-4445-be15-c70a81872ae6"). InnerVolumeSpecName "kube-api-access-7nm4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.691439 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-config\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.692015 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-ovndb-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.692086 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-httpd-config\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: 
\"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.692133 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcprg\" (UniqueName: \"kubernetes.io/projected/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-kube-api-access-fcprg\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.692151 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-internal-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.692184 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-public-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.692201 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-combined-ca-bundle\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.692331 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nm4f\" (UniqueName: \"kubernetes.io/projected/a6fc9567-7eef-4445-be15-c70a81872ae6-kube-api-access-7nm4f\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.702023 4809 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6fc9567-7eef-4445-be15-c70a81872ae6" (UID: "a6fc9567-7eef-4445-be15-c70a81872ae6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.716562 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-ovndb-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.725068 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-combined-ca-bundle\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.725162 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-httpd-config\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.725349 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-public-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.734292 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-config\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.735022 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-internal-tls-certs\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.735873 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-config" (OuterVolumeSpecName: "config") pod "a6fc9567-7eef-4445-be15-c70a81872ae6" (UID: "a6fc9567-7eef-4445-be15-c70a81872ae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.740618 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcprg\" (UniqueName: \"kubernetes.io/projected/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-kube-api-access-fcprg\") pod \"neutron-5b85bc8f8c-p6zd6\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.765009 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6fc9567-7eef-4445-be15-c70a81872ae6" (UID: "a6fc9567-7eef-4445-be15-c70a81872ae6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.775001 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.775707 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kn7bt" event={"ID":"a6fc9567-7eef-4445-be15-c70a81872ae6","Type":"ContainerDied","Data":"0aed50646fd11d60981a63a0be618e398d5104d7dd32730bea98e7afacd8fe0a"} Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.775797 4809 scope.go:117] "RemoveContainer" containerID="c99b1a7ed8285bd03656959c39167924a9a4d9d7beb19db8f9a8ffd5c5af4b50" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.790673 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51","Type":"ContainerStarted","Data":"54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c"} Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.794813 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.794841 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a6fc9567-7eef-4445-be15-c70a81872ae6" (UID: "a6fc9567-7eef-4445-be15-c70a81872ae6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.794872 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.794942 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.802565 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877ddd69d-mkc5h" event={"ID":"160fb026-73c9-4fbd-8602-aa6de6bc9417","Type":"ContainerStarted","Data":"95967317ce2ce6ace655df6f194afdab6cb92db9202c013ec8808385ee4def76"} Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.803494 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6fc9567-7eef-4445-be15-c70a81872ae6" (UID: "a6fc9567-7eef-4445-be15-c70a81872ae6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.826193 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" event={"ID":"fe74aae4-4752-4c38-99a7-eb88886cf5bf","Type":"ContainerStarted","Data":"df796b21d07e69e67d2a4a972870f33362b8dc904a857a3681c4166308719c00"} Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.857162 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.897961 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:27 crc kubenswrapper[4809]: I0312 08:23:27.898006 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6fc9567-7eef-4445-be15-c70a81872ae6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.642589 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kn7bt"] Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.679059 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kn7bt"] Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.891341 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877ddd69d-mkc5h" event={"ID":"160fb026-73c9-4fbd-8602-aa6de6bc9417","Type":"ContainerStarted","Data":"3eecb5ef1d8b229c29f7ec07f94dec773a8f4e06959ae3d2077c7143eb61d022"} Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.897952 4809 generic.go:334] "Generic (PLEG): container finished" podID="0857990f-7921-4ea0-a0c1-e431cc7de107" containerID="0be9d23b1007dffd29b365806fec94ed61f269d21b25655a004b7ec4fa52ad75" exitCode=0 Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.898059 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-98nmv" event={"ID":"0857990f-7921-4ea0-a0c1-e431cc7de107","Type":"ContainerDied","Data":"0be9d23b1007dffd29b365806fec94ed61f269d21b25655a004b7ec4fa52ad75"} Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.910672 4809 generic.go:334] "Generic (PLEG): container finished" podID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" 
containerID="b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81" exitCode=0 Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.910794 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" event={"ID":"fe74aae4-4752-4c38-99a7-eb88886cf5bf","Type":"ContainerDied","Data":"b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81"} Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.925156 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b85bc8f8c-p6zd6"] Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.948411 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3259c31-3ce2-44c0-8a04-7672fffde9a2","Type":"ContainerStarted","Data":"344108671cc743ec609779e0e68defe3aec9702a8e7ba7dda4bf24082f7848c8"} Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.957651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bcp4" event={"ID":"2d78fd7f-3718-4999-a661-ad590dc808a5","Type":"ContainerStarted","Data":"fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31"} Mar 12 08:23:28 crc kubenswrapper[4809]: I0312 08:23:28.965713 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2216a968-959e-4bb5-943e-7d8c0dbd5405","Type":"ContainerStarted","Data":"7aa9fa32293caff7118941d4182a6d7ffabfd6f3165197d927ff88a0d06b2b92"} Mar 12 08:23:29 crc kubenswrapper[4809]: I0312 08:23:29.130962 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6fc9567-7eef-4445-be15-c70a81872ae6" path="/var/lib/kubelet/pods/a6fc9567-7eef-4445-be15-c70a81872ae6/volumes" Mar 12 08:23:29 crc kubenswrapper[4809]: I0312 08:23:29.979137 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b85bc8f8c-p6zd6" 
event={"ID":"5fcbe319-aa82-46b8-a366-ec7ef1a7484e","Type":"ContainerStarted","Data":"42d2150770d0ebc2d2447195520ab3d790c97fb65d07e62ea164fa87c24b823c"} Mar 12 08:23:29 crc kubenswrapper[4809]: I0312 08:23:29.982010 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877ddd69d-mkc5h" event={"ID":"160fb026-73c9-4fbd-8602-aa6de6bc9417","Type":"ContainerStarted","Data":"fe20fa75b8ebe3a499eff0c4a37968a4213fb9728f5ae5283afc6a8e419a7d05"} Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.015408 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7877ddd69d-mkc5h" podStartSLOduration=6.015385267 podStartE2EDuration="6.015385267s" podCreationTimestamp="2026-03-12 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:30.012056457 +0000 UTC m=+1483.594092400" watchObservedRunningTime="2026-03-12 08:23:30.015385267 +0000 UTC m=+1483.597421000" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.584587 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-98nmv" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.701823 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-scripts\") pod \"0857990f-7921-4ea0-a0c1-e431cc7de107\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.701893 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-config-data\") pod \"0857990f-7921-4ea0-a0c1-e431cc7de107\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.703444 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-combined-ca-bundle\") pod \"0857990f-7921-4ea0-a0c1-e431cc7de107\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.703582 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0857990f-7921-4ea0-a0c1-e431cc7de107-logs\") pod \"0857990f-7921-4ea0-a0c1-e431cc7de107\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.703752 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsllc\" (UniqueName: \"kubernetes.io/projected/0857990f-7921-4ea0-a0c1-e431cc7de107-kube-api-access-nsllc\") pod \"0857990f-7921-4ea0-a0c1-e431cc7de107\" (UID: \"0857990f-7921-4ea0-a0c1-e431cc7de107\") " Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.706421 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0857990f-7921-4ea0-a0c1-e431cc7de107-logs" (OuterVolumeSpecName: "logs") pod "0857990f-7921-4ea0-a0c1-e431cc7de107" (UID: "0857990f-7921-4ea0-a0c1-e431cc7de107"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.713328 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-scripts" (OuterVolumeSpecName: "scripts") pod "0857990f-7921-4ea0-a0c1-e431cc7de107" (UID: "0857990f-7921-4ea0-a0c1-e431cc7de107"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.803846 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0857990f-7921-4ea0-a0c1-e431cc7de107-kube-api-access-nsllc" (OuterVolumeSpecName: "kube-api-access-nsllc") pod "0857990f-7921-4ea0-a0c1-e431cc7de107" (UID: "0857990f-7921-4ea0-a0c1-e431cc7de107"). InnerVolumeSpecName "kube-api-access-nsllc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.807292 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsllc\" (UniqueName: \"kubernetes.io/projected/0857990f-7921-4ea0-a0c1-e431cc7de107-kube-api-access-nsllc\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.807711 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.807731 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0857990f-7921-4ea0-a0c1-e431cc7de107-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.933174 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-config-data" (OuterVolumeSpecName: "config-data") pod "0857990f-7921-4ea0-a0c1-e431cc7de107" (UID: "0857990f-7921-4ea0-a0c1-e431cc7de107"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:30 crc kubenswrapper[4809]: I0312 08:23:30.933964 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0857990f-7921-4ea0-a0c1-e431cc7de107" (UID: "0857990f-7921-4ea0-a0c1-e431cc7de107"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.013152 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.013184 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0857990f-7921-4ea0-a0c1-e431cc7de107-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.019977 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b85bc8f8c-p6zd6" event={"ID":"5fcbe319-aa82-46b8-a366-ec7ef1a7484e","Type":"ContainerStarted","Data":"6c0f8b1e491a8e1f35e2308ea7ce7e7a255d03400277325bb1a47117c3ba423f"} Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.023044 4809 generic.go:334] "Generic (PLEG): container finished" podID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerID="fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31" exitCode=0 Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.023134 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bcp4" event={"ID":"2d78fd7f-3718-4999-a661-ad590dc808a5","Type":"ContainerDied","Data":"fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31"} Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.063703 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2216a968-959e-4bb5-943e-7d8c0dbd5405","Type":"ContainerStarted","Data":"2df21a3581682d789abb70904f542fc22c4a2da2da2b9e1e8db2eec9684d0f45"} Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.064855 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6586c99478-72j5b"] Mar 12 08:23:31 crc kubenswrapper[4809]: E0312 
08:23:31.065570 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857990f-7921-4ea0-a0c1-e431cc7de107" containerName="placement-db-sync" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.065592 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857990f-7921-4ea0-a0c1-e431cc7de107" containerName="placement-db-sync" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.065840 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0857990f-7921-4ea0-a0c1-e431cc7de107" containerName="placement-db-sync" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.068517 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.076653 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.076942 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-98nmv" event={"ID":"0857990f-7921-4ea0-a0c1-e431cc7de107","Type":"ContainerDied","Data":"c6c4ca2ae7f80b7583b132d644021d0548ec90b370e1c687d1418dbea4d71710"} Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.076996 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6c4ca2ae7f80b7583b132d644021d0548ec90b370e1c687d1418dbea4d71710" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.076967 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.077357 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-98nmv" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.089670 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" event={"ID":"fe74aae4-4752-4c38-99a7-eb88886cf5bf","Type":"ContainerStarted","Data":"2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268"} Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.091054 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.116228 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-949jb\" (UniqueName: \"kubernetes.io/projected/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-kube-api-access-949jb\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.116296 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-config-data\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.116464 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-internal-tls-certs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.116508 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-scripts\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.116661 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-public-tls-certs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.116709 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-combined-ca-bundle\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.116754 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-logs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.132281 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.132320 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3259c31-3ce2-44c0-8a04-7672fffde9a2","Type":"ContainerStarted","Data":"9114078ab2eaa7a5a26e20e019ec503541d014327fc6ec069a3155d25af877a8"} Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.132346 4809 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/placement-6586c99478-72j5b"] Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.165495 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" podStartSLOduration=7.165469704 podStartE2EDuration="7.165469704s" podCreationTimestamp="2026-03-12 08:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:31.144003293 +0000 UTC m=+1484.726039026" watchObservedRunningTime="2026-03-12 08:23:31.165469704 +0000 UTC m=+1484.747505437" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.237974 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-949jb\" (UniqueName: \"kubernetes.io/projected/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-kube-api-access-949jb\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.238030 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-config-data\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.238251 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-internal-tls-certs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.238315 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-scripts\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.238438 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-public-tls-certs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.238512 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-combined-ca-bundle\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.238547 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-logs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.240267 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-logs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.300584 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-scripts\") pod \"placement-6586c99478-72j5b\" (UID: 
\"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.301776 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-internal-tls-certs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.305697 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-public-tls-certs\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.309830 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-config-data\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.309928 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-combined-ca-bundle\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.313678 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-949jb\" (UniqueName: \"kubernetes.io/projected/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-kube-api-access-949jb\") pod \"placement-6586c99478-72j5b\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " pod="openstack/placement-6586c99478-72j5b" Mar 12 
08:23:31 crc kubenswrapper[4809]: I0312 08:23:31.524258 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:32 crc kubenswrapper[4809]: I0312 08:23:32.132719 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5","Type":"ContainerStarted","Data":"8257e78f2dc56cced6fe9d9d6710f4b2bac915c02afb5eae419c33d7a00aba04"} Mar 12 08:23:32 crc kubenswrapper[4809]: I0312 08:23:32.134651 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerName="glance-log" containerID="cri-o://7aa9fa32293caff7118941d4182a6d7ffabfd6f3165197d927ff88a0d06b2b92" gracePeriod=30 Mar 12 08:23:32 crc kubenswrapper[4809]: I0312 08:23:32.134862 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerName="glance-httpd" containerID="cri-o://2df21a3581682d789abb70904f542fc22c4a2da2da2b9e1e8db2eec9684d0f45" gracePeriod=30 Mar 12 08:23:32 crc kubenswrapper[4809]: I0312 08:23:32.135305 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerName="glance-log" containerID="cri-o://344108671cc743ec609779e0e68defe3aec9702a8e7ba7dda4bf24082f7848c8" gracePeriod=30 Mar 12 08:23:32 crc kubenswrapper[4809]: I0312 08:23:32.135387 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerName="glance-httpd" containerID="cri-o://9114078ab2eaa7a5a26e20e019ec503541d014327fc6ec069a3155d25af877a8" gracePeriod=30 Mar 12 08:23:32 crc kubenswrapper[4809]: I0312 08:23:32.287793 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=23.287725838 podStartE2EDuration="23.287725838s" podCreationTimestamp="2026-03-12 08:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:32.223281005 +0000 UTC m=+1485.805316758" watchObservedRunningTime="2026-03-12 08:23:32.287725838 +0000 UTC m=+1485.869761571" Mar 12 08:23:32 crc kubenswrapper[4809]: I0312 08:23:32.304662 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=23.304637806 podStartE2EDuration="23.304637806s" podCreationTimestamp="2026-03-12 08:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:32.247803098 +0000 UTC m=+1485.829838831" watchObservedRunningTime="2026-03-12 08:23:32.304637806 +0000 UTC m=+1485.886673539" Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.150903 4809 generic.go:334] "Generic (PLEG): container finished" podID="760b497c-568e-4501-86c1-1c9f6b5e7f7d" containerID="0db4a14003c408fe75529fbe0259f755b43e44c40008f4a33a1ce14f88f9e1f5" exitCode=0 Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.151037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jf5zs" event={"ID":"760b497c-568e-4501-86c1-1c9f6b5e7f7d","Type":"ContainerDied","Data":"0db4a14003c408fe75529fbe0259f755b43e44c40008f4a33a1ce14f88f9e1f5"} Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.154298 4809 generic.go:334] "Generic (PLEG): container finished" podID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerID="2df21a3581682d789abb70904f542fc22c4a2da2da2b9e1e8db2eec9684d0f45" exitCode=0 Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.154322 4809 generic.go:334] "Generic (PLEG): 
container finished" podID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerID="7aa9fa32293caff7118941d4182a6d7ffabfd6f3165197d927ff88a0d06b2b92" exitCode=143 Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.154371 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2216a968-959e-4bb5-943e-7d8c0dbd5405","Type":"ContainerDied","Data":"2df21a3581682d789abb70904f542fc22c4a2da2da2b9e1e8db2eec9684d0f45"} Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.154390 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2216a968-959e-4bb5-943e-7d8c0dbd5405","Type":"ContainerDied","Data":"7aa9fa32293caff7118941d4182a6d7ffabfd6f3165197d927ff88a0d06b2b92"} Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.157354 4809 generic.go:334] "Generic (PLEG): container finished" podID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerID="9114078ab2eaa7a5a26e20e019ec503541d014327fc6ec069a3155d25af877a8" exitCode=0 Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.157390 4809 generic.go:334] "Generic (PLEG): container finished" podID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerID="344108671cc743ec609779e0e68defe3aec9702a8e7ba7dda4bf24082f7848c8" exitCode=143 Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.157544 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3259c31-3ce2-44c0-8a04-7672fffde9a2","Type":"ContainerDied","Data":"9114078ab2eaa7a5a26e20e019ec503541d014327fc6ec069a3155d25af877a8"} Mar 12 08:23:33 crc kubenswrapper[4809]: I0312 08:23:33.157663 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3259c31-3ce2-44c0-8a04-7672fffde9a2","Type":"ContainerDied","Data":"344108671cc743ec609779e0e68defe3aec9702a8e7ba7dda4bf24082f7848c8"} Mar 12 08:23:35 crc kubenswrapper[4809]: I0312 08:23:35.199858 4809 
generic.go:334] "Generic (PLEG): container finished" podID="23f0c997-5c6b-4229-911e-efe7b42f59f7" containerID="d8fa398a028701803e96d3104388e36869dab06e89ada1de3c840908f82cf454" exitCode=0 Mar 12 08:23:35 crc kubenswrapper[4809]: I0312 08:23:35.200463 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jssh7" event={"ID":"23f0c997-5c6b-4229-911e-efe7b42f59f7","Type":"ContainerDied","Data":"d8fa398a028701803e96d3104388e36869dab06e89ada1de3c840908f82cf454"} Mar 12 08:23:35 crc kubenswrapper[4809]: I0312 08:23:35.288364 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:35 crc kubenswrapper[4809]: I0312 08:23:35.397704 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"] Mar 12 08:23:35 crc kubenswrapper[4809]: I0312 08:23:35.397986 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" podUID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" containerName="dnsmasq-dns" containerID="cri-o://83c9a8a069771586937d929aaccafb02a39c6ae671f69689b2c351e022fa1bef" gracePeriod=10 Mar 12 08:23:36 crc kubenswrapper[4809]: I0312 08:23:36.221631 4809 generic.go:334] "Generic (PLEG): container finished" podID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" containerID="83c9a8a069771586937d929aaccafb02a39c6ae671f69689b2c351e022fa1bef" exitCode=0 Mar 12 08:23:36 crc kubenswrapper[4809]: I0312 08:23:36.221704 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" event={"ID":"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2","Type":"ContainerDied","Data":"83c9a8a069771586937d929aaccafb02a39c6ae671f69689b2c351e022fa1bef"} Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.378769 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.406371 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.559648 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-config-data\") pod \"23f0c997-5c6b-4229-911e-efe7b42f59f7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.560172 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-fernet-keys\") pod \"23f0c997-5c6b-4229-911e-efe7b42f59f7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.560295 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-combined-ca-bundle\") pod \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.560330 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-scripts\") pod \"23f0c997-5c6b-4229-911e-efe7b42f59f7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.560417 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4vcv\" (UniqueName: \"kubernetes.io/projected/760b497c-568e-4501-86c1-1c9f6b5e7f7d-kube-api-access-j4vcv\") pod \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\" (UID: 
\"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.560514 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-db-sync-config-data\") pod \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\" (UID: \"760b497c-568e-4501-86c1-1c9f6b5e7f7d\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.560618 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-combined-ca-bundle\") pod \"23f0c997-5c6b-4229-911e-efe7b42f59f7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.560712 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-credential-keys\") pod \"23f0c997-5c6b-4229-911e-efe7b42f59f7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.560826 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtphd\" (UniqueName: \"kubernetes.io/projected/23f0c997-5c6b-4229-911e-efe7b42f59f7-kube-api-access-gtphd\") pod \"23f0c997-5c6b-4229-911e-efe7b42f59f7\" (UID: \"23f0c997-5c6b-4229-911e-efe7b42f59f7\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.566820 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "760b497c-568e-4501-86c1-1c9f6b5e7f7d" (UID: "760b497c-568e-4501-86c1-1c9f6b5e7f7d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.588539 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760b497c-568e-4501-86c1-1c9f6b5e7f7d-kube-api-access-j4vcv" (OuterVolumeSpecName: "kube-api-access-j4vcv") pod "760b497c-568e-4501-86c1-1c9f6b5e7f7d" (UID: "760b497c-568e-4501-86c1-1c9f6b5e7f7d"). InnerVolumeSpecName "kube-api-access-j4vcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.589353 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-scripts" (OuterVolumeSpecName: "scripts") pod "23f0c997-5c6b-4229-911e-efe7b42f59f7" (UID: "23f0c997-5c6b-4229-911e-efe7b42f59f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.595411 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "23f0c997-5c6b-4229-911e-efe7b42f59f7" (UID: "23f0c997-5c6b-4229-911e-efe7b42f59f7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.595551 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "23f0c997-5c6b-4229-911e-efe7b42f59f7" (UID: "23f0c997-5c6b-4229-911e-efe7b42f59f7"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.614334 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f0c997-5c6b-4229-911e-efe7b42f59f7-kube-api-access-gtphd" (OuterVolumeSpecName: "kube-api-access-gtphd") pod "23f0c997-5c6b-4229-911e-efe7b42f59f7" (UID: "23f0c997-5c6b-4229-911e-efe7b42f59f7"). InnerVolumeSpecName "kube-api-access-gtphd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.665720 4809 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.665759 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtphd\" (UniqueName: \"kubernetes.io/projected/23f0c997-5c6b-4229-911e-efe7b42f59f7-kube-api-access-gtphd\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.665773 4809 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.665784 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.665797 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4vcv\" (UniqueName: \"kubernetes.io/projected/760b497c-568e-4501-86c1-1c9f6b5e7f7d-kube-api-access-j4vcv\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.665807 4809 reconciler_common.go:293] "Volume detached for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.696605 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.748088 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-config-data" (OuterVolumeSpecName: "config-data") pod "23f0c997-5c6b-4229-911e-efe7b42f59f7" (UID: "23f0c997-5c6b-4229-911e-efe7b42f59f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.770286 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.800206 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "760b497c-568e-4501-86c1-1c9f6b5e7f7d" (UID: "760b497c-568e-4501-86c1-1c9f6b5e7f7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.814258 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23f0c997-5c6b-4229-911e-efe7b42f59f7" (UID: "23f0c997-5c6b-4229-911e-efe7b42f59f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.825844 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.875144 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-sb\") pod \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.875199 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-nb\") pod \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.875253 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q997\" (UniqueName: \"kubernetes.io/projected/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-kube-api-access-4q997\") pod \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.875313 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-svc\") pod \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.875430 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-config\") pod \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " 
Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.875538 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-swift-storage-0\") pod \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\" (UID: \"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.876446 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760b497c-568e-4501-86c1-1c9f6b5e7f7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.876465 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f0c997-5c6b-4229-911e-efe7b42f59f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.885589 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-kube-api-access-4q997" (OuterVolumeSpecName: "kube-api-access-4q997") pod "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" (UID: "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2"). InnerVolumeSpecName "kube-api-access-4q997". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.944373 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" (UID: "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.952818 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" (UID: "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.964619 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" (UID: "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.969130 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" (UID: "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.970751 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-config" (OuterVolumeSpecName: "config") pod "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" (UID: "57e7abd3-40b2-4f84-b669-01f4a2bf9bd2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.978014 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-logs\") pod \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.978091 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-httpd-run\") pod \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.978201 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-combined-ca-bundle\") pod \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.978448 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.978540 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7ftz\" (UniqueName: \"kubernetes.io/projected/d3259c31-3ce2-44c0-8a04-7672fffde9a2-kube-api-access-l7ftz\") pod \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.978565 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-scripts\") pod \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.978691 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-config-data\") pod \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\" (UID: \"d3259c31-3ce2-44c0-8a04-7672fffde9a2\") " Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.979217 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.979241 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.979253 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.979267 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.979278 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.979286 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q997\" (UniqueName: 
\"kubernetes.io/projected/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2-kube-api-access-4q997\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.978683 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d3259c31-3ce2-44c0-8a04-7672fffde9a2" (UID: "d3259c31-3ce2-44c0-8a04-7672fffde9a2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.980243 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-logs" (OuterVolumeSpecName: "logs") pod "d3259c31-3ce2-44c0-8a04-7672fffde9a2" (UID: "d3259c31-3ce2-44c0-8a04-7672fffde9a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.988634 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3259c31-3ce2-44c0-8a04-7672fffde9a2-kube-api-access-l7ftz" (OuterVolumeSpecName: "kube-api-access-l7ftz") pod "d3259c31-3ce2-44c0-8a04-7672fffde9a2" (UID: "d3259c31-3ce2-44c0-8a04-7672fffde9a2"). InnerVolumeSpecName "kube-api-access-l7ftz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:37 crc kubenswrapper[4809]: I0312 08:23:37.991893 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-scripts" (OuterVolumeSpecName: "scripts") pod "d3259c31-3ce2-44c0-8a04-7672fffde9a2" (UID: "d3259c31-3ce2-44c0-8a04-7672fffde9a2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.002972 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac" (OuterVolumeSpecName: "glance") pod "d3259c31-3ce2-44c0-8a04-7672fffde9a2" (UID: "d3259c31-3ce2-44c0-8a04-7672fffde9a2"). InnerVolumeSpecName "pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:23:38 crc kubenswrapper[4809]: W0312 08:23:38.012995 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f8141f6_bf63_4133_9451_df9c0dd0c1e7.slice/crio-81eb050c0c272ca46efed220e765729bc536f39fd373f2fba696f7f5e8215b7a WatchSource:0}: Error finding container 81eb050c0c272ca46efed220e765729bc536f39fd373f2fba696f7f5e8215b7a: Status 404 returned error can't find the container with id 81eb050c0c272ca46efed220e765729bc536f39fd373f2fba696f7f5e8215b7a Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.019526 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6586c99478-72j5b"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.045463 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3259c31-3ce2-44c0-8a04-7672fffde9a2" (UID: "d3259c31-3ce2-44c0-8a04-7672fffde9a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.066105 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-config-data" (OuterVolumeSpecName: "config-data") pod "d3259c31-3ce2-44c0-8a04-7672fffde9a2" (UID: "d3259c31-3ce2-44c0-8a04-7672fffde9a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.083515 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") on node \"crc\" " Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.083562 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7ftz\" (UniqueName: \"kubernetes.io/projected/d3259c31-3ce2-44c0-8a04-7672fffde9a2-kube-api-access-l7ftz\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.083575 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.083585 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.083595 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.083606 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d3259c31-3ce2-44c0-8a04-7672fffde9a2-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.083618 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3259c31-3ce2-44c0-8a04-7672fffde9a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.224647 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.225074 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac") on node "crc" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.267838 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3259c31-3ce2-44c0-8a04-7672fffde9a2","Type":"ContainerDied","Data":"1821c14abd9155306fef5013067538396e92cba1ab7a6be88198f93044253a7d"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.267906 4809 scope.go:117] "RemoveContainer" containerID="9114078ab2eaa7a5a26e20e019ec503541d014327fc6ec069a3155d25af877a8" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.267950 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.288689 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jf5zs" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.281712 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51","Type":"ContainerStarted","Data":"f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.293699 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jf5zs" event={"ID":"760b497c-568e-4501-86c1-1c9f6b5e7f7d","Type":"ContainerDied","Data":"5886a8d03274b4f268332922be13df87fd48b74f00b04268eeead251b5518848"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.293738 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5886a8d03274b4f268332922be13df87fd48b74f00b04268eeead251b5518848" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.293753 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jw2qr" event={"ID":"a8ef0743-567a-4a4b-aada-a0bc3659b200","Type":"ContainerStarted","Data":"d245ce5e756182f20e26a6759b3800a839afa1db8cc82a4b20d0d7a6b179c10b"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.307034 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.308187 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b85bc8f8c-p6zd6" event={"ID":"5fcbe319-aa82-46b8-a366-ec7ef1a7484e","Type":"ContainerStarted","Data":"f01085a64442ed5f2a7494db9e19762dab838c2d11d89dd439cd7f2275356334"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.309512 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 
08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.321837 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" event={"ID":"57e7abd3-40b2-4f84-b669-01f4a2bf9bd2","Type":"ContainerDied","Data":"6366638038dee3b7d69f579b0b92fc3bf2800896219928f489dcc0bea68a47cb"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.322059 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-s5tlv" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.334413 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jw2qr" podStartSLOduration=5.426625 podStartE2EDuration="53.334380814s" podCreationTimestamp="2026-03-12 08:22:45 +0000 UTC" firstStartedPulling="2026-03-12 08:22:49.447063366 +0000 UTC m=+1443.029099089" lastFinishedPulling="2026-03-12 08:23:37.35481917 +0000 UTC m=+1490.936854903" observedRunningTime="2026-03-12 08:23:38.319264605 +0000 UTC m=+1491.901300338" watchObservedRunningTime="2026-03-12 08:23:38.334380814 +0000 UTC m=+1491.916416547" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.356269 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bcp4" event={"ID":"2d78fd7f-3718-4999-a661-ad590dc808a5","Type":"ContainerStarted","Data":"a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.368272 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6586c99478-72j5b" event={"ID":"0f8141f6-bf63-4133-9451-df9c0dd0c1e7","Type":"ContainerStarted","Data":"81eb050c0c272ca46efed220e765729bc536f39fd373f2fba696f7f5e8215b7a"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.370691 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jssh7" 
event={"ID":"23f0c997-5c6b-4229-911e-efe7b42f59f7","Type":"ContainerDied","Data":"1b79888e4c6ec91d793ebcd313e30d7b38397604180b0ae74604293dfc52597c"} Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.370746 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b79888e4c6ec91d793ebcd313e30d7b38397604180b0ae74604293dfc52597c" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.370881 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jssh7" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.373218 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b85bc8f8c-p6zd6" podStartSLOduration=11.373188154 podStartE2EDuration="11.373188154s" podCreationTimestamp="2026-03-12 08:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:38.348885856 +0000 UTC m=+1491.930921599" watchObservedRunningTime="2026-03-12 08:23:38.373188154 +0000 UTC m=+1491.955223877" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.375580 4809 scope.go:117] "RemoveContainer" containerID="344108671cc743ec609779e0e68defe3aec9702a8e7ba7dda4bf24082f7848c8" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.388390 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8bcp4" podStartSLOduration=26.739141621999998 podStartE2EDuration="37.388364035s" podCreationTimestamp="2026-03-12 08:23:01 +0000 UTC" firstStartedPulling="2026-03-12 08:23:26.701221888 +0000 UTC m=+1480.283257621" lastFinishedPulling="2026-03-12 08:23:37.350444301 +0000 UTC m=+1490.932480034" observedRunningTime="2026-03-12 08:23:38.387041559 +0000 UTC m=+1491.969077312" watchObservedRunningTime="2026-03-12 08:23:38.388364035 +0000 UTC m=+1491.970399768" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 
08:23:38.431801 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.502695 4809 scope.go:117] "RemoveContainer" containerID="83c9a8a069771586937d929aaccafb02a39c6ae671f69689b2c351e022fa1bef" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.502924 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.550612 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:38 crc kubenswrapper[4809]: E0312 08:23:38.551439 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760b497c-568e-4501-86c1-1c9f6b5e7f7d" containerName="barbican-db-sync" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551459 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="760b497c-568e-4501-86c1-1c9f6b5e7f7d" containerName="barbican-db-sync" Mar 12 08:23:38 crc kubenswrapper[4809]: E0312 08:23:38.551481 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" containerName="dnsmasq-dns" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551490 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" containerName="dnsmasq-dns" Mar 12 08:23:38 crc kubenswrapper[4809]: E0312 08:23:38.551516 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerName="glance-log" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551522 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerName="glance-log" Mar 12 08:23:38 crc kubenswrapper[4809]: E0312 08:23:38.551550 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" containerName="init" Mar 12 
08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551557 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" containerName="init" Mar 12 08:23:38 crc kubenswrapper[4809]: E0312 08:23:38.551569 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f0c997-5c6b-4229-911e-efe7b42f59f7" containerName="keystone-bootstrap" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551578 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f0c997-5c6b-4229-911e-efe7b42f59f7" containerName="keystone-bootstrap" Mar 12 08:23:38 crc kubenswrapper[4809]: E0312 08:23:38.551588 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerName="glance-httpd" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551593 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerName="glance-httpd" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551836 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerName="glance-log" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551846 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f0c997-5c6b-4229-911e-efe7b42f59f7" containerName="keystone-bootstrap" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551861 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="760b497c-568e-4501-86c1-1c9f6b5e7f7d" containerName="barbican-db-sync" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551872 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" containerName="glance-httpd" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.551888 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" containerName="dnsmasq-dns" Mar 12 08:23:38 crc 
kubenswrapper[4809]: I0312 08:23:38.553361 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.558770 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.559152 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.584424 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.616582 4809 scope.go:117] "RemoveContainer" containerID="ecdb64c88d0c1ac990f48631d321a7d031dd118c75166821e331178164af0ca0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.624413 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-s5tlv"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.627216 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.627273 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.627311 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.627335 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shrw2\" (UniqueName: \"kubernetes.io/projected/6749cc55-c4e2-4011-bb67-0f1676ba152a-kube-api-access-shrw2\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.627375 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-logs\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.627393 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.627506 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.627526 
4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.692841 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.723189 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-76f57f5c5-gb5xf"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.725336 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.730209 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.730279 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.730314 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shrw2\" (UniqueName: \"kubernetes.io/projected/6749cc55-c4e2-4011-bb67-0f1676ba152a-kube-api-access-shrw2\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" 
Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.730341 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.730379 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-logs\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.730398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.730499 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.730519 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc 
kubenswrapper[4809]: I0312 08:23:38.733296 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-logs\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.733598 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.739319 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.753908 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.754157 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.755490 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.756288 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gndsl" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.756630 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.765923 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.766013 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76f57f5c5-gb5xf"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.768675 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.769263 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.778363 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.825016 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d46b6b97c-tzjzj"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.845764 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-config-data\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc 
kubenswrapper[4809]: I0312 08:23:38.845827 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-combined-ca-bundle\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.845850 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-scripts\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.845883 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-public-tls-certs\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.845898 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-internal-tls-certs\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.845918 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-credential-keys\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc 
kubenswrapper[4809]: I0312 08:23:38.845950 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g6r8\" (UniqueName: \"kubernetes.io/projected/d470174b-7e71-48e0-936a-b527f398db7e-kube-api-access-8g6r8\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.845967 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-fernet-keys\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.853033 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shrw2\" (UniqueName: \"kubernetes.io/projected/6749cc55-c4e2-4011-bb67-0f1676ba152a-kube-api-access-shrw2\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.950829 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-public-tls-certs\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.950879 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-internal-tls-certs\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 
08:23:38.950903 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-credential-keys\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.950941 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g6r8\" (UniqueName: \"kubernetes.io/projected/d470174b-7e71-48e0-936a-b527f398db7e-kube-api-access-8g6r8\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.950963 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-fernet-keys\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.951156 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-config-data\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.951186 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-combined-ca-bundle\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.951204 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-scripts\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.955638 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d46b6b97c-tzjzj"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.955822 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.962691 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-scripts\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.977858 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65c74fbb78-q9c6j"] Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.979377 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-public-tls-certs\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.979578 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-config-data\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.979938 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.980190 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-combined-ca-bundle\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.986598 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.986936 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.987075 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-w2csq" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.987973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-fernet-keys\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.988026 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-internal-tls-certs\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:38 crc kubenswrapper[4809]: I0312 08:23:38.996203 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65c74fbb78-q9c6j"] Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.001598 4809 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.018705 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d470174b-7e71-48e0-936a-b527f398db7e-credential-keys\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.022902 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g6r8\" (UniqueName: \"kubernetes.io/projected/d470174b-7e71-48e0-936a-b527f398db7e-kube-api-access-8g6r8\") pod \"keystone-76f57f5c5-gb5xf\" (UID: \"d470174b-7e71-48e0-936a-b527f398db7e\") " pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.054068 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4056654a-5349-4947-9a20-99626cb45c87-logs\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.054145 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data-custom\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.054239 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lspp\" (UniqueName: \"kubernetes.io/projected/4056654a-5349-4947-9a20-99626cb45c87-kube-api-access-4lspp\") pod \"barbican-worker-7d46b6b97c-tzjzj\" 
(UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.054326 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-combined-ca-bundle\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.054395 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.060105 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.069038 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.069088 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5a7d6d655feca30f79868ff57bcbc2eb87574a4dc9b83f3feb9e7096f859fa1/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.156923 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-combined-ca-bundle\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.158598 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a1edce5-95e6-4080-935e-07399aaa4f89-logs\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.158632 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-combined-ca-bundle\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.158664 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbvzn\" (UniqueName: \"kubernetes.io/projected/7a1edce5-95e6-4080-935e-07399aaa4f89-kube-api-access-mbvzn\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.158729 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.158770 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4056654a-5349-4947-9a20-99626cb45c87-logs\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.158813 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data-custom\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.158877 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: 
I0312 08:23:39.158954 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lspp\" (UniqueName: \"kubernetes.io/projected/4056654a-5349-4947-9a20-99626cb45c87-kube-api-access-4lspp\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.159008 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data-custom\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.167500 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4056654a-5349-4947-9a20-99626cb45c87-logs\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.168789 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data-custom\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.170968 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-combined-ca-bundle\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc 
kubenswrapper[4809]: I0312 08:23:39.177274 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.228764 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e7abd3-40b2-4f84-b669-01f4a2bf9bd2" path="/var/lib/kubelet/pods/57e7abd3-40b2-4f84-b669-01f4a2bf9bd2/volumes" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.235079 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lspp\" (UniqueName: \"kubernetes.io/projected/4056654a-5349-4947-9a20-99626cb45c87-kube-api-access-4lspp\") pod \"barbican-worker-7d46b6b97c-tzjzj\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.267853 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3259c31-3ce2-44c0-8a04-7672fffde9a2" path="/var/lib/kubelet/pods/d3259c31-3ce2-44c0-8a04-7672fffde9a2/volumes" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.270062 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.271371 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.322219 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.322232 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data-custom\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.327040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a1edce5-95e6-4080-935e-07399aaa4f89-logs\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.327081 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-combined-ca-bundle\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.327138 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbvzn\" (UniqueName: \"kubernetes.io/projected/7a1edce5-95e6-4080-935e-07399aaa4f89-kube-api-access-mbvzn\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.342353 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a1edce5-95e6-4080-935e-07399aaa4f89-logs\") pod 
\"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.345481 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.347444 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-jrkvl"] Mar 12 08:23:39 crc kubenswrapper[4809]: E0312 08:23:39.348056 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerName="glance-httpd" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.348072 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerName="glance-httpd" Mar 12 08:23:39 crc kubenswrapper[4809]: E0312 08:23:39.348092 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerName="glance-log" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.348102 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerName="glance-log" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.348354 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerName="glance-httpd" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.348374 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" containerName="glance-log" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.349875 4809 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.352425 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-combined-ca-bundle\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.379307 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data-custom\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.396434 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-jrkvl"] Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.417497 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbvzn\" (UniqueName: \"kubernetes.io/projected/7a1edce5-95e6-4080-935e-07399aaa4f89-kube-api-access-mbvzn\") pod \"barbican-keystone-listener-65c74fbb78-q9c6j\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.442756 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.444604 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"2216a968-959e-4bb5-943e-7d8c0dbd5405\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.444661 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-logs\") pod \"2216a968-959e-4bb5-943e-7d8c0dbd5405\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.444824 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-config-data\") pod \"2216a968-959e-4bb5-943e-7d8c0dbd5405\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.444888 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-combined-ca-bundle\") pod \"2216a968-959e-4bb5-943e-7d8c0dbd5405\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.444925 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrzw6\" (UniqueName: \"kubernetes.io/projected/2216a968-959e-4bb5-943e-7d8c0dbd5405-kube-api-access-zrzw6\") pod \"2216a968-959e-4bb5-943e-7d8c0dbd5405\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.444989 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-httpd-run\") pod \"2216a968-959e-4bb5-943e-7d8c0dbd5405\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.445052 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-scripts\") pod \"2216a968-959e-4bb5-943e-7d8c0dbd5405\" (UID: \"2216a968-959e-4bb5-943e-7d8c0dbd5405\") " Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.449871 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.449936 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.450136 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-svc\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.450179 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmns\" (UniqueName: 
\"kubernetes.io/projected/b6e69d29-6d0e-4bad-a44b-6a24e4971281-kube-api-access-dbmns\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.450394 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-config\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.450555 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.465874 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2216a968-959e-4bb5-943e-7d8c0dbd5405" (UID: "2216a968-959e-4bb5-943e-7d8c0dbd5405"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.468693 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2216a968-959e-4bb5-943e-7d8c0dbd5405-kube-api-access-zrzw6" (OuterVolumeSpecName: "kube-api-access-zrzw6") pod "2216a968-959e-4bb5-943e-7d8c0dbd5405" (UID: "2216a968-959e-4bb5-943e-7d8c0dbd5405"). InnerVolumeSpecName "kube-api-access-zrzw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.469240 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-logs" (OuterVolumeSpecName: "logs") pod "2216a968-959e-4bb5-943e-7d8c0dbd5405" (UID: "2216a968-959e-4bb5-943e-7d8c0dbd5405"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.491942 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-scripts" (OuterVolumeSpecName: "scripts") pod "2216a968-959e-4bb5-943e-7d8c0dbd5405" (UID: "2216a968-959e-4bb5-943e-7d8c0dbd5405"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.551305 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7ff4d6f6d-442gb"] Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.563856 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.568301 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.568398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.568831 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-svc\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.568880 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbmns\" (UniqueName: \"kubernetes.io/projected/b6e69d29-6d0e-4bad-a44b-6a24e4971281-kube-api-access-dbmns\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.569131 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-config\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" 
Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.569334 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.569676 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.569699 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.569718 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrzw6\" (UniqueName: \"kubernetes.io/projected/2216a968-959e-4bb5-943e-7d8c0dbd5405-kube-api-access-zrzw6\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.569734 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2216a968-959e-4bb5-943e-7d8c0dbd5405-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.571449 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-svc\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.571648 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.572030 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.578933 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.599691 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-config\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.628025 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.655970 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbmns\" (UniqueName: \"kubernetes.io/projected/b6e69d29-6d0e-4bad-a44b-6a24e4971281-kube-api-access-dbmns\") pod \"dnsmasq-dns-85ff748b95-jrkvl\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.672488 4809 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2216a968-959e-4bb5-943e-7d8c0dbd5405" (UID: "2216a968-959e-4bb5-943e-7d8c0dbd5405"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.674456 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.689744 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6586c99478-72j5b" event={"ID":"0f8141f6-bf63-4133-9451-df9c0dd0c1e7","Type":"ContainerStarted","Data":"c78920513e2ac05da31144b07e115193999eae386099b094fa6da04caa07e3bd"} Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.708459 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.742142 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-config-data" (OuterVolumeSpecName: "config-data") pod "2216a968-959e-4bb5-943e-7d8c0dbd5405" (UID: "2216a968-959e-4bb5-943e-7d8c0dbd5405"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.742204 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.742226 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2216a968-959e-4bb5-943e-7d8c0dbd5405","Type":"ContainerDied","Data":"5ed8c959f50a9aed8c0ae867fac0902a9f96082674b24b5536520430be2bf532"} Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.743053 4809 scope.go:117] "RemoveContainer" containerID="2df21a3581682d789abb70904f542fc22c4a2da2da2b9e1e8db2eec9684d0f45" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.795526 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data-custom\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.795648 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-combined-ca-bundle\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.795728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cb4n\" (UniqueName: \"kubernetes.io/projected/a013b38e-6ede-4108-8a0d-5fb8bed67494-kube-api-access-6cb4n\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.795778 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a013b38e-6ede-4108-8a0d-5fb8bed67494-logs\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.795814 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.795946 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2216a968-959e-4bb5-943e-7d8c0dbd5405-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.796244 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.813931 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7ff4d6f6d-442gb"] Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.823901 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.880494 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68668c5975-8z984"] Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.883104 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.894280 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028" (OuterVolumeSpecName: "glance") pod "2216a968-959e-4bb5-943e-7d8c0dbd5405" (UID: "2216a968-959e-4bb5-943e-7d8c0dbd5405"). InnerVolumeSpecName "pvc-3f492e23-d032-4adb-b2a5-07078c3cb028". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.897841 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cb4n\" (UniqueName: \"kubernetes.io/projected/a013b38e-6ede-4108-8a0d-5fb8bed67494-kube-api-access-6cb4n\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.897944 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a013b38e-6ede-4108-8a0d-5fb8bed67494-logs\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.898026 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.898066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data-custom\") pod \"barbican-api-7ff4d6f6d-442gb\" 
(UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.898236 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-combined-ca-bundle\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.898377 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") on node \"crc\" " Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.932158 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a013b38e-6ede-4108-8a0d-5fb8bed67494-logs\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.937211 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-combined-ca-bundle\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.948717 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.963471 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cb4n\" (UniqueName: \"kubernetes.io/projected/a013b38e-6ede-4108-8a0d-5fb8bed67494-kube-api-access-6cb4n\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.963573 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68668c5975-8z984"] Mar 12 08:23:39 crc kubenswrapper[4809]: I0312 08:23:39.965929 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data-custom\") pod \"barbican-api-7ff4d6f6d-442gb\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.010811 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-combined-ca-bundle\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.010898 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhgv\" (UniqueName: \"kubernetes.io/projected/afaa98ca-8124-4f9a-b990-58012066b090-kube-api-access-fzhgv\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.010963 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afaa98ca-8124-4f9a-b990-58012066b090-logs\") pod 
\"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.011036 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-config-data\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.011251 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-config-data-custom\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.016488 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.016959 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3f492e23-d032-4adb-b2a5-07078c3cb028" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028") on node "crc" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.030255 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6f8c457c74-ppnrt"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.051691 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.070068 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6f8c457c74-ppnrt"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113572 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-combined-ca-bundle\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113692 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-config-data-custom\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113734 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edad7564-3eab-49a8-a90d-7f945bf1a458-logs\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113791 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-combined-ca-bundle\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113812 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-config-data\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113851 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhgv\" (UniqueName: \"kubernetes.io/projected/afaa98ca-8124-4f9a-b990-58012066b090-kube-api-access-fzhgv\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113900 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc97f\" (UniqueName: \"kubernetes.io/projected/edad7564-3eab-49a8-a90d-7f945bf1a458-kube-api-access-jc97f\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113939 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afaa98ca-8124-4f9a-b990-58012066b090-logs\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.113996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-config-data\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 
08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.114054 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-config-data-custom\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.114218 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.121576 4809 scope.go:117] "RemoveContainer" containerID="7aa9fa32293caff7118941d4182a6d7ffabfd6f3165197d927ff88a0d06b2b92" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.121811 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66ff5d45fd-nxlnk"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.121980 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afaa98ca-8124-4f9a-b990-58012066b090-logs\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.126491 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.133754 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-config-data-custom\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.140868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-combined-ca-bundle\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.145015 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66ff5d45fd-nxlnk"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.152931 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhgv\" (UniqueName: \"kubernetes.io/projected/afaa98ca-8124-4f9a-b990-58012066b090-kube-api-access-fzhgv\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.163328 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afaa98ca-8124-4f9a-b990-58012066b090-config-data\") pod \"barbican-worker-68668c5975-8z984\" (UID: \"afaa98ca-8124-4f9a-b990-58012066b090\") " pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.217937 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-logs\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.218598 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-config-data-custom\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.218691 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-combined-ca-bundle\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.218938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-combined-ca-bundle\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.219058 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.220976 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edad7564-3eab-49a8-a90d-7f945bf1a458-logs\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.221614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhq4l\" (UniqueName: \"kubernetes.io/projected/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-kube-api-access-bhq4l\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.221679 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-config-data\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.221786 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data-custom\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.221848 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc97f\" (UniqueName: \"kubernetes.io/projected/edad7564-3eab-49a8-a90d-7f945bf1a458-kube-api-access-jc97f\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: 
I0312 08:23:40.222975 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68668c5975-8z984" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.227708 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edad7564-3eab-49a8-a90d-7f945bf1a458-logs\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.229678 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-config-data\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.229806 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-combined-ca-bundle\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.234350 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edad7564-3eab-49a8-a90d-7f945bf1a458-config-data-custom\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.248273 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc97f\" (UniqueName: 
\"kubernetes.io/projected/edad7564-3eab-49a8-a90d-7f945bf1a458-kube-api-access-jc97f\") pod \"barbican-keystone-listener-6f8c457c74-ppnrt\" (UID: \"edad7564-3eab-49a8-a90d-7f945bf1a458\") " pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.257090 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.323911 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhq4l\" (UniqueName: \"kubernetes.io/projected/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-kube-api-access-bhq4l\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.323996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data-custom\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.324050 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-logs\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.324146 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-combined-ca-bundle\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " 
pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.324171 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.327612 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-logs\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.334514 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data-custom\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.342085 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.345497 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-combined-ca-bundle\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 
08:23:40.372658 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhq4l\" (UniqueName: \"kubernetes.io/projected/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-kube-api-access-bhq4l\") pod \"barbican-api-66ff5d45fd-nxlnk\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.432612 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.467895 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.500684 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d46b6b97c-tzjzj"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.596209 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.657213 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.720051 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.728464 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.734414 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.739853 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.741599 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.850042 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-logs\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.850407 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-scripts\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.850433 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.850489 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.850528 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-config-data\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.850549 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.850610 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.850656 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98jr\" (UniqueName: \"kubernetes.io/projected/8766c893-91de-40a1-b884-381264755524-kube-api-access-p98jr\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.864829 4809 generic.go:334] "Generic (PLEG): container 
finished" podID="9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5" containerID="8257e78f2dc56cced6fe9d9d6710f4b2bac915c02afb5eae419c33d7a00aba04" exitCode=0 Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.864994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5","Type":"ContainerDied","Data":"8257e78f2dc56cced6fe9d9d6710f4b2bac915c02afb5eae419c33d7a00aba04"} Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.897101 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" event={"ID":"4056654a-5349-4947-9a20-99626cb45c87","Type":"ContainerStarted","Data":"8eccdad5aacb690d2ad207eb8b9ce84ebc2e4235acca1c9a632c5ffbd9172771"} Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.921503 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6586c99478-72j5b" event={"ID":"0f8141f6-bf63-4133-9451-df9c0dd0c1e7","Type":"ContainerStarted","Data":"deac96f5d03d62ef8b9106ebf6cbcd9a47096ae5a8380d7f3af6bec1c61cc791"} Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.922516 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.922553 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6586c99478-72j5b" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.953687 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.953787 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p98jr\" (UniqueName: \"kubernetes.io/projected/8766c893-91de-40a1-b884-381264755524-kube-api-access-p98jr\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.953931 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-logs\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.953980 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-scripts\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.954000 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.954089 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.954155 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-config-data\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.954205 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.960402 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-logs\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.966973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.967904 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.970957 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.982018 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-config-data\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.995533 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6586c99478-72j5b" podStartSLOduration=9.995512652 podStartE2EDuration="9.995512652s" podCreationTimestamp="2026-03-12 08:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:40.98880244 +0000 UTC m=+1494.570838173" watchObservedRunningTime="2026-03-12 08:23:40.995512652 +0000 UTC m=+1494.577548385" Mar 12 08:23:40 crc kubenswrapper[4809]: I0312 08:23:40.997320 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p98jr\" (UniqueName: \"kubernetes.io/projected/8766c893-91de-40a1-b884-381264755524-kube-api-access-p98jr\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:40.998081 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.012131 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13e7c719e4a31debe9dfef24b9790869cfa71b5da18da849b6185df18b1db2d8/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.000580 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.097985 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65c74fbb78-q9c6j"] Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.202244 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2216a968-959e-4bb5-943e-7d8c0dbd5405" path="/var/lib/kubelet/pods/2216a968-959e-4bb5-943e-7d8c0dbd5405/volumes" Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.206763 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76f57f5c5-gb5xf"] Mar 12 08:23:41 crc kubenswrapper[4809]: W0312 08:23:41.236864 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd470174b_7e71_48e0_936a_b527f398db7e.slice/crio-33c8bd760a2aff61bd67c0bdc3d34e55df4685528a0b346b50cf5a3da19e2532 WatchSource:0}: Error finding container 33c8bd760a2aff61bd67c0bdc3d34e55df4685528a0b346b50cf5a3da19e2532: 
Status 404 returned error can't find the container with id 33c8bd760a2aff61bd67c0bdc3d34e55df4685528a0b346b50cf5a3da19e2532 Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.252334 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " pod="openstack/glance-default-external-api-0" Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.380676 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.384065 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-jrkvl"] Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.612037 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.872691 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6f8c457c74-ppnrt"] Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.886506 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68668c5975-8z984"] Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.898956 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7ff4d6f6d-442gb"] Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.911765 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66ff5d45fd-nxlnk"] Mar 12 08:23:41 crc kubenswrapper[4809]: I0312 08:23:41.990543 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff4d6f6d-442gb" 
event={"ID":"a013b38e-6ede-4108-8a0d-5fb8bed67494","Type":"ContainerStarted","Data":"1d3cba12cd1eaf7c39386bebd0132822047d3badee9a3cf5d3501570851d2243"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.004248 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" event={"ID":"b6e69d29-6d0e-4bad-a44b-6a24e4971281","Type":"ContainerStarted","Data":"a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.004367 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" event={"ID":"b6e69d29-6d0e-4bad-a44b-6a24e4971281","Type":"ContainerStarted","Data":"829953f1aa9c5a5d1c33383ab69814d7cb87a60dc961f49ec51abd12299b56d0"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.008997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6749cc55-c4e2-4011-bb67-0f1676ba152a","Type":"ContainerStarted","Data":"f84528640c039303cb4e4f580e5ec019695f6453e3f2b7259c9deb955d535714"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.011581 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" event={"ID":"edad7564-3eab-49a8-a90d-7f945bf1a458","Type":"ContainerStarted","Data":"389d97530764917f838f816581c2ebf40bbbd7c44df134db47c5eec0ba36e53f"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.136428 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76f57f5c5-gb5xf" event={"ID":"d470174b-7e71-48e0-936a-b527f398db7e","Type":"ContainerStarted","Data":"af0d6563066c523ed1236ea5f0ae5afcf8b44f1644c64fe5124a7febb62bf088"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.136494 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76f57f5c5-gb5xf" 
event={"ID":"d470174b-7e71-48e0-936a-b527f398db7e","Type":"ContainerStarted","Data":"33c8bd760a2aff61bd67c0bdc3d34e55df4685528a0b346b50cf5a3da19e2532"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.138277 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.213706 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qxpjh" event={"ID":"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a","Type":"ContainerStarted","Data":"cb0b5aae4655ec5d15ff955cf21bf0af072f7fe49e63438ef83a5f9f31218560"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.238802 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66ff5d45fd-nxlnk" event={"ID":"95d1f5f5-216c-45e1-aefc-3e54135c8dc8","Type":"ContainerStarted","Data":"89fcfeb5d03a45308ea4a4ae5c17487457e43c90592288644526e29bea644a42"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.242612 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" event={"ID":"7a1edce5-95e6-4080-935e-07399aaa4f89","Type":"ContainerStarted","Data":"552b570ccf94bb17eeee17a5cf6a64a67ca7295489d2ada633fc5619650be6c6"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.250013 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5","Type":"ContainerStarted","Data":"ee1d7cc56a40e33ce765ea3b929be822ad001472d08f862316b58dfd1fba9886"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.264388 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.268458 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:23:42 crc 
kubenswrapper[4809]: I0312 08:23:42.269006 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-76f57f5c5-gb5xf" podStartSLOduration=4.268972645 podStartE2EDuration="4.268972645s" podCreationTimestamp="2026-03-12 08:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:42.21330904 +0000 UTC m=+1495.795344773" watchObservedRunningTime="2026-03-12 08:23:42.268972645 +0000 UTC m=+1495.851008378" Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.282739 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68668c5975-8z984" event={"ID":"afaa98ca-8124-4f9a-b990-58012066b090","Type":"ContainerStarted","Data":"6736adbd45800763c7858197d2e6e81c859e9815591a8ff89b392f2188d6c7aa"} Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.304104 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qxpjh" podStartSLOduration=7.993136339 podStartE2EDuration="57.304073824s" podCreationTimestamp="2026-03-12 08:22:45 +0000 UTC" firstStartedPulling="2026-03-12 08:22:49.480355607 +0000 UTC m=+1443.062391340" lastFinishedPulling="2026-03-12 08:23:38.791293092 +0000 UTC m=+1492.373328825" observedRunningTime="2026-03-12 08:23:42.282690096 +0000 UTC m=+1495.864725829" watchObservedRunningTime="2026-03-12 08:23:42.304073824 +0000 UTC m=+1495.886109577" Mar 12 08:23:42 crc kubenswrapper[4809]: I0312 08:23:42.484204 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:23:43 crc kubenswrapper[4809]: I0312 08:23:43.374652 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff4d6f6d-442gb" event={"ID":"a013b38e-6ede-4108-8a0d-5fb8bed67494","Type":"ContainerStarted","Data":"4b0af71c1f5b93a43e124c991707fa02a53a71a7089eb71e2bb9c56348de1df6"} Mar 12 08:23:43 crc 
kubenswrapper[4809]: I0312 08:23:43.381670 4809 generic.go:334] "Generic (PLEG): container finished" podID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" containerID="a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa" exitCode=0 Mar 12 08:23:43 crc kubenswrapper[4809]: I0312 08:23:43.382221 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" event={"ID":"b6e69d29-6d0e-4bad-a44b-6a24e4971281","Type":"ContainerDied","Data":"a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa"} Mar 12 08:23:43 crc kubenswrapper[4809]: I0312 08:23:43.382294 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" event={"ID":"b6e69d29-6d0e-4bad-a44b-6a24e4971281","Type":"ContainerStarted","Data":"bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4"} Mar 12 08:23:43 crc kubenswrapper[4809]: I0312 08:23:43.382313 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:43 crc kubenswrapper[4809]: I0312 08:23:43.389636 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8766c893-91de-40a1-b884-381264755524","Type":"ContainerStarted","Data":"8b26403ec7d11594f64ea47fce7f5382e8da5ed65d075f7e10340a77cddfdd51"} Mar 12 08:23:43 crc kubenswrapper[4809]: I0312 08:23:43.394294 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66ff5d45fd-nxlnk" event={"ID":"95d1f5f5-216c-45e1-aefc-3e54135c8dc8","Type":"ContainerStarted","Data":"72e8bc1b8a6396eeb429ccc2b2b7ac525c14e6bf18f83121c8c44933ce18c5fb"} Mar 12 08:23:43 crc kubenswrapper[4809]: I0312 08:23:43.417289 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" podStartSLOduration=4.417266154 podStartE2EDuration="4.417266154s" podCreationTimestamp="2026-03-12 08:23:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:43.407732866 +0000 UTC m=+1496.989768619" watchObservedRunningTime="2026-03-12 08:23:43.417266154 +0000 UTC m=+1496.999301887" Mar 12 08:23:43 crc kubenswrapper[4809]: I0312 08:23:43.426222 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bcp4" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" probeResult="failure" output=< Mar 12 08:23:43 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:23:43 crc kubenswrapper[4809]: > Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.418512 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff4d6f6d-442gb" event={"ID":"a013b38e-6ede-4108-8a0d-5fb8bed67494","Type":"ContainerStarted","Data":"cf14a34735dd7332c0abf54b54f0b2b51d5158472dfd0c3e1281ef8e0c53e7f1"} Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.419162 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.419197 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.425394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8766c893-91de-40a1-b884-381264755524","Type":"ContainerStarted","Data":"e4d92fee75bd1ec69fd16528e38ef58e14eb355e4dd1b3a0540f3a72c00dc6e5"} Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.428706 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66ff5d45fd-nxlnk" event={"ID":"95d1f5f5-216c-45e1-aefc-3e54135c8dc8","Type":"ContainerStarted","Data":"2adb252ef094c6ddfcb8042c46e139beff54844d29e7e89ebdf52570ff4d54b1"} Mar 12 
08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.428945 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.428966 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.433297 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6749cc55-c4e2-4011-bb67-0f1676ba152a","Type":"ContainerStarted","Data":"4b3a4b7f8aff56f30bc85b7810c93920eb309ccc4f891a7bd049a1bd7d8eebb8"} Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.457435 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7ff4d6f6d-442gb" podStartSLOduration=5.457394506 podStartE2EDuration="5.457394506s" podCreationTimestamp="2026-03-12 08:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:44.4375515 +0000 UTC m=+1498.019587253" watchObservedRunningTime="2026-03-12 08:23:44.457394506 +0000 UTC m=+1498.039430239" Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.491298 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66ff5d45fd-nxlnk" podStartSLOduration=5.491269643 podStartE2EDuration="5.491269643s" podCreationTimestamp="2026-03-12 08:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:44.465240699 +0000 UTC m=+1498.047276432" watchObservedRunningTime="2026-03-12 08:23:44.491269643 +0000 UTC m=+1498.073305376" Mar 12 08:23:44 crc kubenswrapper[4809]: I0312 08:23:44.945668 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66ff5d45fd-nxlnk"] Mar 12 08:23:45 crc 
kubenswrapper[4809]: I0312 08:23:45.043085 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-787b8bc5d6-ldxk6"] Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.046223 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.050967 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.051031 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.051486 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.051688 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.057565 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-787b8bc5d6-ldxk6"] Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.188885 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-internal-tls-certs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 
12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.189335 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-public-tls-certs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.189505 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2396067f-cc69-4c96-8acd-a74b7667ebf3-logs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.189622 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wtrd\" (UniqueName: \"kubernetes.io/projected/2396067f-cc69-4c96-8acd-a74b7667ebf3-kube-api-access-8wtrd\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.189950 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-combined-ca-bundle\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.190157 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-config-data-custom\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: 
\"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.190369 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-config-data\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.315802 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-config-data\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.316035 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-internal-tls-certs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.316063 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-public-tls-certs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.316783 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2396067f-cc69-4c96-8acd-a74b7667ebf3-logs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " 
pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.319233 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2396067f-cc69-4c96-8acd-a74b7667ebf3-logs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.324580 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wtrd\" (UniqueName: \"kubernetes.io/projected/2396067f-cc69-4c96-8acd-a74b7667ebf3-kube-api-access-8wtrd\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.325175 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-combined-ca-bundle\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.325223 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-config-data-custom\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.331324 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-config-data\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 
08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.332025 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-public-tls-certs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.356564 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-internal-tls-certs\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.356799 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-config-data-custom\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.361163 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wtrd\" (UniqueName: \"kubernetes.io/projected/2396067f-cc69-4c96-8acd-a74b7667ebf3-kube-api-access-8wtrd\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 08:23:45.364534 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396067f-cc69-4c96-8acd-a74b7667ebf3-combined-ca-bundle\") pod \"barbican-api-787b8bc5d6-ldxk6\" (UID: \"2396067f-cc69-4c96-8acd-a74b7667ebf3\") " pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:45 crc kubenswrapper[4809]: I0312 
08:23:45.374345 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:46 crc kubenswrapper[4809]: I0312 08:23:46.527886 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66ff5d45fd-nxlnk" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerName="barbican-api-log" containerID="cri-o://72e8bc1b8a6396eeb429ccc2b2b7ac525c14e6bf18f83121c8c44933ce18c5fb" gracePeriod=30 Mar 12 08:23:46 crc kubenswrapper[4809]: I0312 08:23:46.528163 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5","Type":"ContainerStarted","Data":"0ff93fcb87bef8c944777572265509efb5e45b4f12df7e1653e22dce2b8e10b2"} Mar 12 08:23:46 crc kubenswrapper[4809]: I0312 08:23:46.528916 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66ff5d45fd-nxlnk" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerName="barbican-api" containerID="cri-o://2adb252ef094c6ddfcb8042c46e139beff54844d29e7e89ebdf52570ff4d54b1" gracePeriod=30 Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.184215 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-787b8bc5d6-ldxk6"] Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.657591 4809 generic.go:334] "Generic (PLEG): container finished" podID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerID="2adb252ef094c6ddfcb8042c46e139beff54844d29e7e89ebdf52570ff4d54b1" exitCode=0 Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.658423 4809 generic.go:334] "Generic (PLEG): container finished" podID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerID="72e8bc1b8a6396eeb429ccc2b2b7ac525c14e6bf18f83121c8c44933ce18c5fb" exitCode=143 Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.658506 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-66ff5d45fd-nxlnk" event={"ID":"95d1f5f5-216c-45e1-aefc-3e54135c8dc8","Type":"ContainerDied","Data":"2adb252ef094c6ddfcb8042c46e139beff54844d29e7e89ebdf52570ff4d54b1"} Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.658540 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66ff5d45fd-nxlnk" event={"ID":"95d1f5f5-216c-45e1-aefc-3e54135c8dc8","Type":"ContainerDied","Data":"72e8bc1b8a6396eeb429ccc2b2b7ac525c14e6bf18f83121c8c44933ce18c5fb"} Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.669107 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" event={"ID":"edad7564-3eab-49a8-a90d-7f945bf1a458","Type":"ContainerStarted","Data":"efdda8b719d2242a25c8a31f19fcfe0aacc6f6be481400f4446046f7f43a9666"} Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.672537 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" event={"ID":"7a1edce5-95e6-4080-935e-07399aaa4f89","Type":"ContainerStarted","Data":"d3c2172fcadc00c0e9f60a8390e41884b9cc66b77ca35abf7ca63644e94ab4af"} Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.674904 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-787b8bc5d6-ldxk6" event={"ID":"2396067f-cc69-4c96-8acd-a74b7667ebf3","Type":"ContainerStarted","Data":"1d2c3ec8b0868d84d5e2a914adb420b0347ebca9265c66e84e97168ce85b45e8"} Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.699854 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5","Type":"ContainerStarted","Data":"b70aa78f90dd1c8c340610c2af6c92b02a1f9e17985dd0e0304bca247e2ee861"} Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.912561 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:47 crc kubenswrapper[4809]: I0312 08:23:47.940583 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.940559778 podStartE2EDuration="24.940559778s" podCreationTimestamp="2026-03-12 08:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:47.7506351 +0000 UTC m=+1501.332670843" watchObservedRunningTime="2026-03-12 08:23:47.940559778 +0000 UTC m=+1501.522595511" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.023150 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-logs\") pod \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.023261 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhq4l\" (UniqueName: \"kubernetes.io/projected/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-kube-api-access-bhq4l\") pod \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.023480 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data-custom\") pod \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.023506 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data\") pod \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\" 
(UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.023600 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-combined-ca-bundle\") pod \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\" (UID: \"95d1f5f5-216c-45e1-aefc-3e54135c8dc8\") " Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.035184 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-logs" (OuterVolumeSpecName: "logs") pod "95d1f5f5-216c-45e1-aefc-3e54135c8dc8" (UID: "95d1f5f5-216c-45e1-aefc-3e54135c8dc8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.049340 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-kube-api-access-bhq4l" (OuterVolumeSpecName: "kube-api-access-bhq4l") pod "95d1f5f5-216c-45e1-aefc-3e54135c8dc8" (UID: "95d1f5f5-216c-45e1-aefc-3e54135c8dc8"). InnerVolumeSpecName "kube-api-access-bhq4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.066165 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "95d1f5f5-216c-45e1-aefc-3e54135c8dc8" (UID: "95d1f5f5-216c-45e1-aefc-3e54135c8dc8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.126035 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhq4l\" (UniqueName: \"kubernetes.io/projected/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-kube-api-access-bhq4l\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.126068 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.126077 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.364636 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95d1f5f5-216c-45e1-aefc-3e54135c8dc8" (UID: "95d1f5f5-216c-45e1-aefc-3e54135c8dc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.435184 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.438602 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data" (OuterVolumeSpecName: "config-data") pod "95d1f5f5-216c-45e1-aefc-3e54135c8dc8" (UID: "95d1f5f5-216c-45e1-aefc-3e54135c8dc8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.537509 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d1f5f5-216c-45e1-aefc-3e54135c8dc8-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.766428 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-787b8bc5d6-ldxk6" event={"ID":"2396067f-cc69-4c96-8acd-a74b7667ebf3","Type":"ContainerStarted","Data":"2069ab55945cad3bd17344763e237cdeb8a738fe615a0c9b2230ed7893bdcddf"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.793686 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68668c5975-8z984" event={"ID":"afaa98ca-8124-4f9a-b990-58012066b090","Type":"ContainerStarted","Data":"893975203b8b28c270f06d50ac51ec712e1a4d795a336b2a587c0f303a0947b8"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.793974 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68668c5975-8z984" event={"ID":"afaa98ca-8124-4f9a-b990-58012066b090","Type":"ContainerStarted","Data":"0a5c5caff5913376b692bbdb4701414bc78a9a8710c73599b48376544fa5a731"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.819317 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" event={"ID":"4056654a-5349-4947-9a20-99626cb45c87","Type":"ContainerStarted","Data":"59909581a8e075412baec12432956bda87a8d1a40bf16b140c0a714ceb712a79"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.819379 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" event={"ID":"4056654a-5349-4947-9a20-99626cb45c87","Type":"ContainerStarted","Data":"938fceeaa88246079030664c43060e5e347a3940d5aa4be9c6fa1115c13dd005"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.837512 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68668c5975-8z984" podStartSLOduration=5.132626952 podStartE2EDuration="9.837488768s" podCreationTimestamp="2026-03-12 08:23:39 +0000 UTC" firstStartedPulling="2026-03-12 08:23:41.920374966 +0000 UTC m=+1495.502410699" lastFinishedPulling="2026-03-12 08:23:46.625236782 +0000 UTC m=+1500.207272515" observedRunningTime="2026-03-12 08:23:48.825667317 +0000 UTC m=+1502.407703050" watchObservedRunningTime="2026-03-12 08:23:48.837488768 +0000 UTC m=+1502.419524501" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.854284 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8766c893-91de-40a1-b884-381264755524","Type":"ContainerStarted","Data":"6fe198a9e0493ad9e7ac15c32d55e3b52abb8fa1dcf93d05ded3ac3af3f2ae7c"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.882003 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" podStartSLOduration=4.962741299 podStartE2EDuration="10.881940939s" podCreationTimestamp="2026-03-12 08:23:38 +0000 UTC" firstStartedPulling="2026-03-12 08:23:40.564586176 +0000 UTC m=+1494.146621909" lastFinishedPulling="2026-03-12 08:23:46.483785816 +0000 UTC m=+1500.065821549" observedRunningTime="2026-03-12 08:23:48.856423759 +0000 UTC m=+1502.438459492" watchObservedRunningTime="2026-03-12 08:23:48.881940939 +0000 UTC m=+1502.463976672" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.882844 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66ff5d45fd-nxlnk" event={"ID":"95d1f5f5-216c-45e1-aefc-3e54135c8dc8","Type":"ContainerDied","Data":"89fcfeb5d03a45308ea4a4ae5c17487457e43c90592288644526e29bea644a42"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.882919 4809 scope.go:117] "RemoveContainer" 
containerID="2adb252ef094c6ddfcb8042c46e139beff54844d29e7e89ebdf52570ff4d54b1" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.882960 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66ff5d45fd-nxlnk" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.915884 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.915866408 podStartE2EDuration="8.915866408s" podCreationTimestamp="2026-03-12 08:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:48.914358746 +0000 UTC m=+1502.496394469" watchObservedRunningTime="2026-03-12 08:23:48.915866408 +0000 UTC m=+1502.497902141" Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.922449 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6749cc55-c4e2-4011-bb67-0f1676ba152a","Type":"ContainerStarted","Data":"26c86f288b5f00692fc2f46685df82851d0a87b3cba2344baa420954d27b9f69"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.970821 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7d46b6b97c-tzjzj"] Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.972814 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" event={"ID":"edad7564-3eab-49a8-a90d-7f945bf1a458","Type":"ContainerStarted","Data":"59f520dfd32bef30114b9b7702ebb5a74ae78b1d83bba1937dff304ec64027f5"} Mar 12 08:23:48 crc kubenswrapper[4809]: I0312 08:23:48.982861 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" event={"ID":"7a1edce5-95e6-4080-935e-07399aaa4f89","Type":"ContainerStarted","Data":"a8657b1ff2bb53e82c78c19f96e3df8f79962e9fd0898bff6f9c3c136e1c82be"} Mar 12 
08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.022430 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.022408959 podStartE2EDuration="11.022408959s" podCreationTimestamp="2026-03-12 08:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:49.020642961 +0000 UTC m=+1502.602678694" watchObservedRunningTime="2026-03-12 08:23:49.022408959 +0000 UTC m=+1502.604444692" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.046518 4809 scope.go:117] "RemoveContainer" containerID="72e8bc1b8a6396eeb429ccc2b2b7ac525c14e6bf18f83121c8c44933ce18c5fb" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.065542 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.074229 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6f8c457c74-ppnrt" podStartSLOduration=5.420776246 podStartE2EDuration="10.07419979s" podCreationTimestamp="2026-03-12 08:23:39 +0000 UTC" firstStartedPulling="2026-03-12 08:23:41.948184639 +0000 UTC m=+1495.530220372" lastFinishedPulling="2026-03-12 08:23:46.601608183 +0000 UTC m=+1500.183643916" observedRunningTime="2026-03-12 08:23:49.048200966 +0000 UTC m=+1502.630236699" watchObservedRunningTime="2026-03-12 08:23:49.07419979 +0000 UTC m=+1502.656235523" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.197844 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-65c74fbb78-q9c6j"] Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.200710 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" podStartSLOduration=5.774424384 
podStartE2EDuration="11.200698791s" podCreationTimestamp="2026-03-12 08:23:38 +0000 UTC" firstStartedPulling="2026-03-12 08:23:41.20730175 +0000 UTC m=+1494.789337483" lastFinishedPulling="2026-03-12 08:23:46.633576167 +0000 UTC m=+1500.215611890" observedRunningTime="2026-03-12 08:23:49.114954092 +0000 UTC m=+1502.696989825" watchObservedRunningTime="2026-03-12 08:23:49.200698791 +0000 UTC m=+1502.782734524" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.284035 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66ff5d45fd-nxlnk"] Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.324167 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-66ff5d45fd-nxlnk"] Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.797828 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.799408 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.827161 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.841849 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.903814 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fp9lq"] Mar 12 08:23:49 crc kubenswrapper[4809]: I0312 08:23:49.911817 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" podUID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" containerName="dnsmasq-dns" containerID="cri-o://2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268" 
gracePeriod=10 Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.002402 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.072879 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-787b8bc5d6-ldxk6" event={"ID":"2396067f-cc69-4c96-8acd-a74b7667ebf3","Type":"ContainerStarted","Data":"07d355632162667b80d283b719562bf88c6119b91414b2942acbad979b4e74b4"} Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.074636 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.074779 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.076518 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.076555 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.133932 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-787b8bc5d6-ldxk6" podStartSLOduration=6.133907272 podStartE2EDuration="6.133907272s" podCreationTimestamp="2026-03-12 08:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:23:50.101861806 +0000 UTC m=+1503.683897549" watchObservedRunningTime="2026-03-12 08:23:50.133907272 +0000 UTC m=+1503.715943005" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.620410 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.736285 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-sb\") pod \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.736353 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-config\") pod \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.736509 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-nb\") pod \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.736582 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gk9j\" (UniqueName: \"kubernetes.io/projected/fe74aae4-4752-4c38-99a7-eb88886cf5bf-kube-api-access-8gk9j\") pod \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.736639 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-swift-storage-0\") pod \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.736699 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-svc\") pod \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\" (UID: \"fe74aae4-4752-4c38-99a7-eb88886cf5bf\") " Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.786383 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe74aae4-4752-4c38-99a7-eb88886cf5bf-kube-api-access-8gk9j" (OuterVolumeSpecName: "kube-api-access-8gk9j") pod "fe74aae4-4752-4c38-99a7-eb88886cf5bf" (UID: "fe74aae4-4752-4c38-99a7-eb88886cf5bf"). InnerVolumeSpecName "kube-api-access-8gk9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.841290 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gk9j\" (UniqueName: \"kubernetes.io/projected/fe74aae4-4752-4c38-99a7-eb88886cf5bf-kube-api-access-8gk9j\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.881267 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe74aae4-4752-4c38-99a7-eb88886cf5bf" (UID: "fe74aae4-4752-4c38-99a7-eb88886cf5bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.935997 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe74aae4-4752-4c38-99a7-eb88886cf5bf" (UID: "fe74aae4-4752-4c38-99a7-eb88886cf5bf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.937007 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe74aae4-4752-4c38-99a7-eb88886cf5bf" (UID: "fe74aae4-4752-4c38-99a7-eb88886cf5bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.943650 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.943688 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.943704 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.958628 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-config" (OuterVolumeSpecName: "config") pod "fe74aae4-4752-4c38-99a7-eb88886cf5bf" (UID: "fe74aae4-4752-4c38-99a7-eb88886cf5bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:50 crc kubenswrapper[4809]: I0312 08:23:50.990820 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe74aae4-4752-4c38-99a7-eb88886cf5bf" (UID: "fe74aae4-4752-4c38-99a7-eb88886cf5bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.045671 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.045703 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe74aae4-4752-4c38-99a7-eb88886cf5bf-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.086861 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.086898 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" event={"ID":"fe74aae4-4752-4c38-99a7-eb88886cf5bf","Type":"ContainerDied","Data":"2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268"} Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.086969 4809 scope.go:117] "RemoveContainer" containerID="2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.086775 4809 generic.go:334] "Generic (PLEG): container finished" podID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" containerID="2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268" exitCode=0 Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.100753 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" event={"ID":"fe74aae4-4752-4c38-99a7-eb88886cf5bf","Type":"ContainerDied","Data":"df796b21d07e69e67d2a4a972870f33362b8dc904a857a3681c4166308719c00"} Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.123872 4809 generic.go:334] "Generic (PLEG): container finished" podID="a8ef0743-567a-4a4b-aada-a0bc3659b200" containerID="d245ce5e756182f20e26a6759b3800a839afa1db8cc82a4b20d0d7a6b179c10b" exitCode=0 Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.129491 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerName="barbican-keystone-listener-log" containerID="cri-o://d3c2172fcadc00c0e9f60a8390e41884b9cc66b77ca35abf7ca63644e94ab4af" gracePeriod=30 Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.129888 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" 
podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerName="barbican-keystone-listener" containerID="cri-o://a8657b1ff2bb53e82c78c19f96e3df8f79962e9fd0898bff6f9c3c136e1c82be" gracePeriod=30 Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.130048 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" podUID="4056654a-5349-4947-9a20-99626cb45c87" containerName="barbican-worker-log" containerID="cri-o://938fceeaa88246079030664c43060e5e347a3940d5aa4be9c6fa1115c13dd005" gracePeriod=30 Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.130343 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" podUID="4056654a-5349-4947-9a20-99626cb45c87" containerName="barbican-worker" containerID="cri-o://59909581a8e075412baec12432956bda87a8d1a40bf16b140c0a714ceb712a79" gracePeriod=30 Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.189363 4809 scope.go:117] "RemoveContainer" containerID="b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.208614 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" path="/var/lib/kubelet/pods/95d1f5f5-216c-45e1-aefc-3e54135c8dc8/volumes" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.225291 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fp9lq"] Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.225337 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jw2qr" event={"ID":"a8ef0743-567a-4a4b-aada-a0bc3659b200","Type":"ContainerDied","Data":"d245ce5e756182f20e26a6759b3800a839afa1db8cc82a4b20d0d7a6b179c10b"} Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.229553 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fp9lq"] Mar 12 
08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.384495 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.384545 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.395204 4809 scope.go:117] "RemoveContainer" containerID="2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268" Mar 12 08:23:51 crc kubenswrapper[4809]: E0312 08:23:51.395721 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268\": container with ID starting with 2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268 not found: ID does not exist" containerID="2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.395759 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268"} err="failed to get container status \"2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268\": rpc error: code = NotFound desc = could not find container \"2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268\": container with ID starting with 2efb76dd17a8757c1e6bd2b3322198730b10e90bda1d870079d1ba0616392268 not found: ID does not exist" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.395782 4809 scope.go:117] "RemoveContainer" containerID="b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81" Mar 12 08:23:51 crc kubenswrapper[4809]: E0312 08:23:51.396493 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81\": container with ID starting with b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81 not found: ID does not exist" containerID="b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.396520 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81"} err="failed to get container status \"b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81\": rpc error: code = NotFound desc = could not find container \"b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81\": container with ID starting with b3dab38dd189fe1f376d08d668b2eb0b5e4d2d1dc23f1f3c258ce68086087b81 not found: ID does not exist" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.481823 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 08:23:51 crc kubenswrapper[4809]: I0312 08:23:51.769006 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 08:23:52 crc kubenswrapper[4809]: I0312 08:23:52.143822 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" event={"ID":"4056654a-5349-4947-9a20-99626cb45c87","Type":"ContainerDied","Data":"938fceeaa88246079030664c43060e5e347a3940d5aa4be9c6fa1115c13dd005"} Mar 12 08:23:52 crc kubenswrapper[4809]: I0312 08:23:52.144444 4809 generic.go:334] "Generic (PLEG): container finished" podID="4056654a-5349-4947-9a20-99626cb45c87" containerID="938fceeaa88246079030664c43060e5e347a3940d5aa4be9c6fa1115c13dd005" exitCode=143 Mar 12 08:23:52 crc kubenswrapper[4809]: I0312 08:23:52.169277 4809 generic.go:334] "Generic (PLEG): container finished" podID="7a1edce5-95e6-4080-935e-07399aaa4f89" 
containerID="d3c2172fcadc00c0e9f60a8390e41884b9cc66b77ca35abf7ca63644e94ab4af" exitCode=143 Mar 12 08:23:52 crc kubenswrapper[4809]: I0312 08:23:52.169832 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 08:23:52 crc kubenswrapper[4809]: I0312 08:23:52.169814 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" event={"ID":"7a1edce5-95e6-4080-935e-07399aaa4f89","Type":"ContainerDied","Data":"d3c2172fcadc00c0e9f60a8390e41884b9cc66b77ca35abf7ca63644e94ab4af"} Mar 12 08:23:52 crc kubenswrapper[4809]: I0312 08:23:52.173360 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 08:23:52 crc kubenswrapper[4809]: I0312 08:23:52.173497 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 08:23:53 crc kubenswrapper[4809]: I0312 08:23:53.034429 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:53 crc kubenswrapper[4809]: I0312 08:23:53.129461 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" path="/var/lib/kubelet/pods/fe74aae4-4752-4c38-99a7-eb88886cf5bf/volumes" Mar 12 08:23:53 crc kubenswrapper[4809]: I0312 08:23:53.208316 4809 generic.go:334] "Generic (PLEG): container finished" podID="8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" containerID="cb0b5aae4655ec5d15ff955cf21bf0af072f7fe49e63438ef83a5f9f31218560" exitCode=0 Mar 12 08:23:53 crc kubenswrapper[4809]: I0312 08:23:53.208466 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qxpjh" event={"ID":"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a","Type":"ContainerDied","Data":"cb0b5aae4655ec5d15ff955cf21bf0af072f7fe49e63438ef83a5f9f31218560"} Mar 12 08:23:53 crc kubenswrapper[4809]: I0312 08:23:53.262705 4809 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:23:53 crc kubenswrapper[4809]: I0312 08:23:53.368984 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bcp4" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" probeResult="failure" output=< Mar 12 08:23:53 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:23:53 crc kubenswrapper[4809]: > Mar 12 08:23:54 crc kubenswrapper[4809]: I0312 08:23:54.059300 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:54 crc kubenswrapper[4809]: I0312 08:23:54.076363 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:54 crc kubenswrapper[4809]: I0312 08:23:54.242320 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 08:23:54 crc kubenswrapper[4809]: I0312 08:23:54.264106 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.286468 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-fp9lq" podUID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: i/o timeout" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.392270 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.627227 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.627352 4809 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.904009 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b85bc8f8c-p6zd6"] Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.904307 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b85bc8f8c-p6zd6" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-api" containerID="cri-o://6c0f8b1e491a8e1f35e2308ea7ce7e7a255d03400277325bb1a47117c3ba423f" gracePeriod=30 Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.905172 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b85bc8f8c-p6zd6" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-httpd" containerID="cri-o://f01085a64442ed5f2a7494db9e19762dab838c2d11d89dd439cd7f2275356334" gracePeriod=30 Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.938680 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-677488855f-bz28w"] Mar 12 08:23:55 crc kubenswrapper[4809]: E0312 08:23:55.939548 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" containerName="init" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.939567 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" containerName="init" Mar 12 08:23:55 crc kubenswrapper[4809]: E0312 08:23:55.939585 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerName="barbican-api-log" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.939593 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerName="barbican-api-log" Mar 12 08:23:55 crc kubenswrapper[4809]: E0312 08:23:55.939630 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" 
containerName="dnsmasq-dns" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.939639 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" containerName="dnsmasq-dns" Mar 12 08:23:55 crc kubenswrapper[4809]: E0312 08:23:55.939659 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerName="barbican-api" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.939669 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerName="barbican-api" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.940023 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerName="barbican-api-log" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.940045 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d1f5f5-216c-45e1-aefc-3e54135c8dc8" containerName="barbican-api" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.940076 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe74aae4-4752-4c38-99a7-eb88886cf5bf" containerName="dnsmasq-dns" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.942037 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.944065 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5b85bc8f8c-p6zd6" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.201:9696/\": read tcp 10.217.0.2:46430->10.217.0.201:9696: read: connection reset by peer" Mar 12 08:23:55 crc kubenswrapper[4809]: I0312 08:23:55.971539 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-677488855f-bz28w"] Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.049211 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-internal-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.049277 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-combined-ca-bundle\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.049297 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-httpd-config\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.049356 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-ovndb-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.049448 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-config\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.049464 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-public-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.049482 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dps82\" (UniqueName: \"kubernetes.io/projected/5c65e075-ccaa-4054-9903-ebcd26368c00-kube-api-access-dps82\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.152783 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-config\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.152838 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-public-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.152866 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dps82\" (UniqueName: \"kubernetes.io/projected/5c65e075-ccaa-4054-9903-ebcd26368c00-kube-api-access-dps82\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.152983 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-internal-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.153018 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-combined-ca-bundle\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.153043 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-httpd-config\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.153108 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-ovndb-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.165531 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-httpd-config\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.165669 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-ovndb-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.165757 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-config\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.173696 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-public-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.175679 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dps82\" (UniqueName: \"kubernetes.io/projected/5c65e075-ccaa-4054-9903-ebcd26368c00-kube-api-access-dps82\") pod \"neutron-677488855f-bz28w\" (UID: 
\"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.182397 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-combined-ca-bundle\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.190709 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e075-ccaa-4054-9903-ebcd26368c00-internal-tls-certs\") pod \"neutron-677488855f-bz28w\" (UID: \"5c65e075-ccaa-4054-9903-ebcd26368c00\") " pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.261854 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.278867 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-677488855f-bz28w" Mar 12 08:23:56 crc kubenswrapper[4809]: I0312 08:23:56.312732 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 08:23:57 crc kubenswrapper[4809]: I0312 08:23:57.323969 4809 generic.go:334] "Generic (PLEG): container finished" podID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerID="f01085a64442ed5f2a7494db9e19762dab838c2d11d89dd439cd7f2275356334" exitCode=0 Mar 12 08:23:57 crc kubenswrapper[4809]: I0312 08:23:57.324408 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b85bc8f8c-p6zd6" event={"ID":"5fcbe319-aa82-46b8-a366-ec7ef1a7484e","Type":"ContainerDied","Data":"f01085a64442ed5f2a7494db9e19762dab838c2d11d89dd439cd7f2275356334"} Mar 12 08:23:57 crc kubenswrapper[4809]: I0312 08:23:57.858387 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5b85bc8f8c-p6zd6" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.201:9696/\": dial tcp 10.217.0.201:9696: connect: connection refused" Mar 12 08:23:58 crc kubenswrapper[4809]: I0312 08:23:58.739023 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:58 crc kubenswrapper[4809]: I0312 08:23:58.916158 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-787b8bc5d6-ldxk6" Mar 12 08:23:59 crc kubenswrapper[4809]: I0312 08:23:59.016338 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7ff4d6f6d-442gb"] Mar 12 08:23:59 crc kubenswrapper[4809]: I0312 08:23:59.016865 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7ff4d6f6d-442gb" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api-log" 
containerID="cri-o://4b0af71c1f5b93a43e124c991707fa02a53a71a7089eb71e2bb9c56348de1df6" gracePeriod=30 Mar 12 08:23:59 crc kubenswrapper[4809]: I0312 08:23:59.017003 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7ff4d6f6d-442gb" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api" containerID="cri-o://cf14a34735dd7332c0abf54b54f0b2b51d5158472dfd0c3e1281ef8e0c53e7f1" gracePeriod=30 Mar 12 08:23:59 crc kubenswrapper[4809]: I0312 08:23:59.034376 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7ff4d6f6d-442gb" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": EOF" Mar 12 08:23:59 crc kubenswrapper[4809]: I0312 08:23:59.377213 4809 generic.go:334] "Generic (PLEG): container finished" podID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerID="4b0af71c1f5b93a43e124c991707fa02a53a71a7089eb71e2bb9c56348de1df6" exitCode=143 Mar 12 08:23:59 crc kubenswrapper[4809]: I0312 08:23:59.377316 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff4d6f6d-442gb" event={"ID":"a013b38e-6ede-4108-8a0d-5fb8bed67494","Type":"ContainerDied","Data":"4b0af71c1f5b93a43e124c991707fa02a53a71a7089eb71e2bb9c56348de1df6"} Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.143575 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555064-rz9hw"] Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.145777 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555064-rz9hw" Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.148677 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.149013 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.149205 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.158471 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555064-rz9hw"] Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.345268 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2hxg\" (UniqueName: \"kubernetes.io/projected/bb8a8f45-bb92-4a72-8fcb-5c82ab191829-kube-api-access-t2hxg\") pod \"auto-csr-approver-29555064-rz9hw\" (UID: \"bb8a8f45-bb92-4a72-8fcb-5c82ab191829\") " pod="openshift-infra/auto-csr-approver-29555064-rz9hw" Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.424513 4809 generic.go:334] "Generic (PLEG): container finished" podID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerID="6c0f8b1e491a8e1f35e2308ea7ce7e7a255d03400277325bb1a47117c3ba423f" exitCode=0 Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.424582 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b85bc8f8c-p6zd6" event={"ID":"5fcbe319-aa82-46b8-a366-ec7ef1a7484e","Type":"ContainerDied","Data":"6c0f8b1e491a8e1f35e2308ea7ce7e7a255d03400277325bb1a47117c3ba423f"} Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.451617 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2hxg\" (UniqueName: 
\"kubernetes.io/projected/bb8a8f45-bb92-4a72-8fcb-5c82ab191829-kube-api-access-t2hxg\") pod \"auto-csr-approver-29555064-rz9hw\" (UID: \"bb8a8f45-bb92-4a72-8fcb-5c82ab191829\") " pod="openshift-infra/auto-csr-approver-29555064-rz9hw" Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.490012 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2hxg\" (UniqueName: \"kubernetes.io/projected/bb8a8f45-bb92-4a72-8fcb-5c82ab191829-kube-api-access-t2hxg\") pod \"auto-csr-approver-29555064-rz9hw\" (UID: \"bb8a8f45-bb92-4a72-8fcb-5c82ab191829\") " pod="openshift-infra/auto-csr-approver-29555064-rz9hw" Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.513070 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555064-rz9hw" Mar 12 08:24:00 crc kubenswrapper[4809]: I0312 08:24:00.764384 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 08:24:01 crc kubenswrapper[4809]: I0312 08:24:01.503073 4809 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0857990f-7921-4ea0-a0c1-e431cc7de107"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0857990f-7921-4ea0-a0c1-e431cc7de107] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0857990f_7921_4ea0_a0c1_e431cc7de107.slice" Mar 12 08:24:01 crc kubenswrapper[4809]: E0312 08:24:01.503618 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod0857990f-7921-4ea0-a0c1-e431cc7de107] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod0857990f-7921-4ea0-a0c1-e431cc7de107] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0857990f_7921_4ea0_a0c1_e431cc7de107.slice" pod="openstack/placement-db-sync-98nmv" podUID="0857990f-7921-4ea0-a0c1-e431cc7de107" Mar 12 
08:24:01 crc kubenswrapper[4809]: I0312 08:24:01.830700 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jw2qr" Mar 12 08:24:01 crc kubenswrapper[4809]: I0312 08:24:01.996853 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-config-data\") pod \"a8ef0743-567a-4a4b-aada-a0bc3659b200\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " Mar 12 08:24:01 crc kubenswrapper[4809]: I0312 08:24:01.997267 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-combined-ca-bundle\") pod \"a8ef0743-567a-4a4b-aada-a0bc3659b200\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " Mar 12 08:24:01 crc kubenswrapper[4809]: I0312 08:24:01.997545 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htrmz\" (UniqueName: \"kubernetes.io/projected/a8ef0743-567a-4a4b-aada-a0bc3659b200-kube-api-access-htrmz\") pod \"a8ef0743-567a-4a4b-aada-a0bc3659b200\" (UID: \"a8ef0743-567a-4a4b-aada-a0bc3659b200\") " Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.005903 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ef0743-567a-4a4b-aada-a0bc3659b200-kube-api-access-htrmz" (OuterVolumeSpecName: "kube-api-access-htrmz") pod "a8ef0743-567a-4a4b-aada-a0bc3659b200" (UID: "a8ef0743-567a-4a4b-aada-a0bc3659b200"). InnerVolumeSpecName "kube-api-access-htrmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.059824 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8ef0743-567a-4a4b-aada-a0bc3659b200" (UID: "a8ef0743-567a-4a4b-aada-a0bc3659b200"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.100696 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htrmz\" (UniqueName: \"kubernetes.io/projected/a8ef0743-567a-4a4b-aada-a0bc3659b200-kube-api-access-htrmz\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.100728 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.105277 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-config-data" (OuterVolumeSpecName: "config-data") pod "a8ef0743-567a-4a4b-aada-a0bc3659b200" (UID: "a8ef0743-567a-4a4b-aada-a0bc3659b200"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.203730 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef0743-567a-4a4b-aada-a0bc3659b200-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.463743 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-98nmv" Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.464953 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jw2qr" Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.465190 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jw2qr" event={"ID":"a8ef0743-567a-4a4b-aada-a0bc3659b200","Type":"ContainerDied","Data":"ced1accef93230581153926941451cabea3c98d64f62b21f05bc7086675351b1"} Mar 12 08:24:02 crc kubenswrapper[4809]: I0312 08:24:02.465235 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced1accef93230581153926941451cabea3c98d64f62b21f05bc7086675351b1" Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.326032 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bcp4" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" probeResult="failure" output=< Mar 12 08:24:03 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:24:03 crc kubenswrapper[4809]: > Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.497835 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7ff4d6f6d-442gb" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": read tcp 10.217.0.2:39014->10.217.0.208:9311: read: connection reset by peer" Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.497835 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7ff4d6f6d-442gb" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": read tcp 10.217.0.2:39030->10.217.0.208:9311: read: connection reset by peer" Mar 12 08:24:03 crc 
kubenswrapper[4809]: I0312 08:24:03.567554 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6586c99478-72j5b" Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.574162 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6586c99478-72j5b" Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.933973 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f84ff9464-ktgn6"] Mar 12 08:24:03 crc kubenswrapper[4809]: E0312 08:24:03.935335 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ef0743-567a-4a4b-aada-a0bc3659b200" containerName="heat-db-sync" Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.935385 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ef0743-567a-4a4b-aada-a0bc3659b200" containerName="heat-db-sync" Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.935994 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ef0743-567a-4a4b-aada-a0bc3659b200" containerName="heat-db-sync" Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.944838 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:03 crc kubenswrapper[4809]: I0312 08:24:03.961350 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f84ff9464-ktgn6"] Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.079624 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-public-tls-certs\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.079723 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-config-data\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.079855 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-combined-ca-bundle\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.079886 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102cd54-f02d-4f95-8152-6012f7397103-logs\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.079975 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-internal-tls-certs\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.080038 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-scripts\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.080109 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbx95\" (UniqueName: \"kubernetes.io/projected/0102cd54-f02d-4f95-8152-6012f7397103-kube-api-access-cbx95\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.183928 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-internal-tls-certs\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.184146 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-scripts\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.184328 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbx95\" (UniqueName: 
\"kubernetes.io/projected/0102cd54-f02d-4f95-8152-6012f7397103-kube-api-access-cbx95\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.184377 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-public-tls-certs\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.184475 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-config-data\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.184704 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-combined-ca-bundle\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.184755 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102cd54-f02d-4f95-8152-6012f7397103-logs\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.186378 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0102cd54-f02d-4f95-8152-6012f7397103-logs\") pod \"placement-6f84ff9464-ktgn6\" 
(UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.194138 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-scripts\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.194863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-internal-tls-certs\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.194961 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-public-tls-certs\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.202448 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbx95\" (UniqueName: \"kubernetes.io/projected/0102cd54-f02d-4f95-8152-6012f7397103-kube-api-access-cbx95\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.207358 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-config-data\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc 
kubenswrapper[4809]: I0312 08:24:04.213139 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0102cd54-f02d-4f95-8152-6012f7397103-combined-ca-bundle\") pod \"placement-6f84ff9464-ktgn6\" (UID: \"0102cd54-f02d-4f95-8152-6012f7397103\") " pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.286434 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.565470 4809 generic.go:334] "Generic (PLEG): container finished" podID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerID="cf14a34735dd7332c0abf54b54f0b2b51d5158472dfd0c3e1281ef8e0c53e7f1" exitCode=0 Mar 12 08:24:04 crc kubenswrapper[4809]: I0312 08:24:04.567395 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff4d6f6d-442gb" event={"ID":"a013b38e-6ede-4108-8a0d-5fb8bed67494","Type":"ContainerDied","Data":"cf14a34735dd7332c0abf54b54f0b2b51d5158472dfd0c3e1281ef8e0c53e7f1"} Mar 12 08:24:05 crc kubenswrapper[4809]: E0312 08:24:05.036055 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Mar 12 08:24:05 crc kubenswrapper[4809]: E0312 08:24:05.036310 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh8xw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e08acf54-60ca-4dc7-bd01-ce8fed2bcc51): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 08:24:05 crc kubenswrapper[4809]: E0312 08:24:05.037721 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.260666 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7ff4d6f6d-442gb" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": dial tcp 10.217.0.208:9311: connect: connection refused" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.262271 4809 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7ff4d6f6d-442gb" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": dial tcp 10.217.0.208:9311: connect: connection refused" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.262477 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.270082 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.420720 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44gf7\" (UniqueName: \"kubernetes.io/projected/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-kube-api-access-44gf7\") pod \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.421817 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-scripts\") pod \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.422126 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-config-data\") pod \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.422249 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-etc-machine-id\") pod 
\"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.422329 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-db-sync-config-data\") pod \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.422409 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-combined-ca-bundle\") pod \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\" (UID: \"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.422574 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" (UID: "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.423031 4809 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.429390 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" (UID: "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.430562 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-kube-api-access-44gf7" (OuterVolumeSpecName: "kube-api-access-44gf7") pod "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" (UID: "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a"). InnerVolumeSpecName "kube-api-access-44gf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.433799 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-scripts" (OuterVolumeSpecName: "scripts") pod "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" (UID: "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.463923 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" (UID: "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.502176 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-config-data" (OuterVolumeSpecName: "config-data") pod "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" (UID: "8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.525276 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.525323 4809 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.525335 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.525345 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44gf7\" (UniqueName: \"kubernetes.io/projected/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-kube-api-access-44gf7\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.525354 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.581652 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qxpjh" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.581709 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerName="ceilometer-notification-agent" containerID="cri-o://54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c" gracePeriod=30 Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.581830 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qxpjh" event={"ID":"8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a","Type":"ContainerDied","Data":"c54e6f190b7896abc8d058d0e76090b51ca3e3354e2fe3714dfc91d9096fb985"} Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.581854 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54e6f190b7896abc8d058d0e76090b51ca3e3354e2fe3714dfc91d9096fb985" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.582422 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerName="sg-core" containerID="cri-o://f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54" gracePeriod=30 Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.798184 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.934443 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data-custom\") pod \"a013b38e-6ede-4108-8a0d-5fb8bed67494\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.934523 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a013b38e-6ede-4108-8a0d-5fb8bed67494-logs\") pod \"a013b38e-6ede-4108-8a0d-5fb8bed67494\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.934752 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data\") pod \"a013b38e-6ede-4108-8a0d-5fb8bed67494\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.934876 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-combined-ca-bundle\") pod \"a013b38e-6ede-4108-8a0d-5fb8bed67494\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.934923 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cb4n\" (UniqueName: \"kubernetes.io/projected/a013b38e-6ede-4108-8a0d-5fb8bed67494-kube-api-access-6cb4n\") pod \"a013b38e-6ede-4108-8a0d-5fb8bed67494\" (UID: \"a013b38e-6ede-4108-8a0d-5fb8bed67494\") " Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.936678 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a013b38e-6ede-4108-8a0d-5fb8bed67494-logs" (OuterVolumeSpecName: "logs") pod "a013b38e-6ede-4108-8a0d-5fb8bed67494" (UID: "a013b38e-6ede-4108-8a0d-5fb8bed67494"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.947462 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a013b38e-6ede-4108-8a0d-5fb8bed67494" (UID: "a013b38e-6ede-4108-8a0d-5fb8bed67494"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.952389 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a013b38e-6ede-4108-8a0d-5fb8bed67494-kube-api-access-6cb4n" (OuterVolumeSpecName: "kube-api-access-6cb4n") pod "a013b38e-6ede-4108-8a0d-5fb8bed67494" (UID: "a013b38e-6ede-4108-8a0d-5fb8bed67494"). InnerVolumeSpecName "kube-api-access-6cb4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:05 crc kubenswrapper[4809]: I0312 08:24:05.981636 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a013b38e-6ede-4108-8a0d-5fb8bed67494" (UID: "a013b38e-6ede-4108-8a0d-5fb8bed67494"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.009394 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data" (OuterVolumeSpecName: "config-data") pod "a013b38e-6ede-4108-8a0d-5fb8bed67494" (UID: "a013b38e-6ede-4108-8a0d-5fb8bed67494"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.033697 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.038474 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.038502 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a013b38e-6ede-4108-8a0d-5fb8bed67494-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.038514 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.038524 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a013b38e-6ede-4108-8a0d-5fb8bed67494-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.038533 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cb4n\" (UniqueName: \"kubernetes.io/projected/a013b38e-6ede-4108-8a0d-5fb8bed67494-kube-api-access-6cb4n\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: W0312 08:24:06.083988 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb8a8f45_bb92_4a72_8fcb_5c82ab191829.slice/crio-2268643d51ba948b0a7921251393e43b3a85d21fca80a7f7e0b6a028c4eef956 WatchSource:0}: Error finding container 
2268643d51ba948b0a7921251393e43b3a85d21fca80a7f7e0b6a028c4eef956: Status 404 returned error can't find the container with id 2268643d51ba948b0a7921251393e43b3a85d21fca80a7f7e0b6a028c4eef956 Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.099232 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555064-rz9hw"] Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.140311 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-internal-tls-certs\") pod \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.141042 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-ovndb-tls-certs\") pod \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.141135 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-config\") pod \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.141213 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcprg\" (UniqueName: \"kubernetes.io/projected/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-kube-api-access-fcprg\") pod \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.141252 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-combined-ca-bundle\") pod \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.141364 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-httpd-config\") pod \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.141395 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-public-tls-certs\") pod \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\" (UID: \"5fcbe319-aa82-46b8-a366-ec7ef1a7484e\") " Mar 12 08:24:06 crc kubenswrapper[4809]: W0312 08:24:06.146651 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0102cd54_f02d_4f95_8152_6012f7397103.slice/crio-2a2345e21490a3179deb8930b19c16561309433911296224c27d64b1ca50b483 WatchSource:0}: Error finding container 2a2345e21490a3179deb8930b19c16561309433911296224c27d64b1ca50b483: Status 404 returned error can't find the container with id 2a2345e21490a3179deb8930b19c16561309433911296224c27d64b1ca50b483 Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.148570 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-kube-api-access-fcprg" (OuterVolumeSpecName: "kube-api-access-fcprg") pod "5fcbe319-aa82-46b8-a366-ec7ef1a7484e" (UID: "5fcbe319-aa82-46b8-a366-ec7ef1a7484e"). InnerVolumeSpecName "kube-api-access-fcprg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.149083 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5fcbe319-aa82-46b8-a366-ec7ef1a7484e" (UID: "5fcbe319-aa82-46b8-a366-ec7ef1a7484e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.151317 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f84ff9464-ktgn6"] Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.232235 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-677488855f-bz28w"] Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.245076 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.245125 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcprg\" (UniqueName: \"kubernetes.io/projected/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-kube-api-access-fcprg\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.247370 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5fcbe319-aa82-46b8-a366-ec7ef1a7484e" (UID: "5fcbe319-aa82-46b8-a366-ec7ef1a7484e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.270807 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-config" (OuterVolumeSpecName: "config") pod "5fcbe319-aa82-46b8-a366-ec7ef1a7484e" (UID: "5fcbe319-aa82-46b8-a366-ec7ef1a7484e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.274544 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5fcbe319-aa82-46b8-a366-ec7ef1a7484e" (UID: "5fcbe319-aa82-46b8-a366-ec7ef1a7484e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.292349 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fcbe319-aa82-46b8-a366-ec7ef1a7484e" (UID: "5fcbe319-aa82-46b8-a366-ec7ef1a7484e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.312802 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5fcbe319-aa82-46b8-a366-ec7ef1a7484e" (UID: "5fcbe319-aa82-46b8-a366-ec7ef1a7484e"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.347681 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.347718 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.347728 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.347739 4809 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.347749 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fcbe319-aa82-46b8-a366-ec7ef1a7484e-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.610652 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:06 crc kubenswrapper[4809]: E0312 08:24:06.611774 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.611796 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api" Mar 12 08:24:06 crc kubenswrapper[4809]: E0312 08:24:06.611817 4809 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api-log" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.611824 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api-log" Mar 12 08:24:06 crc kubenswrapper[4809]: E0312 08:24:06.611848 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-httpd" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.611855 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-httpd" Mar 12 08:24:06 crc kubenswrapper[4809]: E0312 08:24:06.611878 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-api" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.611885 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-api" Mar 12 08:24:06 crc kubenswrapper[4809]: E0312 08:24:06.611906 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" containerName="cinder-db-sync" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.611913 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" containerName="cinder-db-sync" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.612157 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-httpd" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.612185 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" containerName="cinder-db-sync" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.612199 4809 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" containerName="neutron-api" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.612210 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api-log" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.612219 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" containerName="barbican-api" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.613604 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.619337 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m2fpj" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.619590 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.619917 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.622833 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.631354 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.658574 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.658734 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-scripts\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.658945 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.659034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.659147 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnh8t\" (UniqueName: \"kubernetes.io/projected/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-kube-api-access-xnh8t\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.659188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.665994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29555064-rz9hw" event={"ID":"bb8a8f45-bb92-4a72-8fcb-5c82ab191829","Type":"ContainerStarted","Data":"2268643d51ba948b0a7921251393e43b3a85d21fca80a7f7e0b6a028c4eef956"} Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.681394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b85bc8f8c-p6zd6" event={"ID":"5fcbe319-aa82-46b8-a366-ec7ef1a7484e","Type":"ContainerDied","Data":"42d2150770d0ebc2d2447195520ab3d790c97fb65d07e62ea164fa87c24b823c"} Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.681491 4809 scope.go:117] "RemoveContainer" containerID="f01085a64442ed5f2a7494db9e19762dab838c2d11d89dd439cd7f2275356334" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.681508 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b85bc8f8c-p6zd6" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.690163 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f84ff9464-ktgn6" event={"ID":"0102cd54-f02d-4f95-8152-6012f7397103","Type":"ContainerStarted","Data":"2a2345e21490a3179deb8930b19c16561309433911296224c27d64b1ca50b483"} Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.709804 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677488855f-bz28w" event={"ID":"5c65e075-ccaa-4054-9903-ebcd26368c00","Type":"ContainerStarted","Data":"620b90a04a4fcf88c70e7b24cf5dec43ed516baf831b87e5597bc8a985db3bf3"} Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.719030 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4qr2q"] Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.720996 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.724735 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff4d6f6d-442gb" event={"ID":"a013b38e-6ede-4108-8a0d-5fb8bed67494","Type":"ContainerDied","Data":"1d3cba12cd1eaf7c39386bebd0132822047d3badee9a3cf5d3501570851d2243"} Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.733577 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7ff4d6f6d-442gb" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760573 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760621 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6spcp\" (UniqueName: \"kubernetes.io/projected/c0262dd0-da72-4f2a-a2e9-bbb12081451d-kube-api-access-6spcp\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760643 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760673 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760706 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760748 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnh8t\" (UniqueName: \"kubernetes.io/projected/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-kube-api-access-xnh8t\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760771 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760804 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-config\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760827 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760881 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760915 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.760944 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-scripts\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.765560 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.771173 4809 generic.go:334] "Generic (PLEG): container finished" podID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerID="f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54" exitCode=2 Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.771461 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51","Type":"ContainerDied","Data":"f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54"} Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.789339 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.798730 4809 scope.go:117] "RemoveContainer" containerID="6c0f8b1e491a8e1f35e2308ea7ce7e7a255d03400277325bb1a47117c3ba423f" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.810081 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.812728 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.816559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnh8t\" (UniqueName: \"kubernetes.io/projected/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-kube-api-access-xnh8t\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.822755 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-scripts\") pod \"cinder-scheduler-0\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.839455 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4qr2q"] Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.866624 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-config\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.866678 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.866768 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.866818 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.866842 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6spcp\" (UniqueName: \"kubernetes.io/projected/c0262dd0-da72-4f2a-a2e9-bbb12081451d-kube-api-access-6spcp\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.866866 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.867932 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.867944 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.868519 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.873816 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-config\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.874751 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.905029 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6spcp\" (UniqueName: \"kubernetes.io/projected/c0262dd0-da72-4f2a-a2e9-bbb12081451d-kube-api-access-6spcp\") pod \"dnsmasq-dns-5c9776ccc5-4qr2q\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.930511 4809 scope.go:117] "RemoveContainer" containerID="cf14a34735dd7332c0abf54b54f0b2b51d5158472dfd0c3e1281ef8e0c53e7f1" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.941070 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7ff4d6f6d-442gb"] Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.974175 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.977265 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7ff4d6f6d-442gb"] Mar 12 08:24:06 crc kubenswrapper[4809]: I0312 08:24:06.986912 4809 scope.go:117] "RemoveContainer" containerID="4b0af71c1f5b93a43e124c991707fa02a53a71a7089eb71e2bb9c56348de1df6" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.206018 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.258732 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a013b38e-6ede-4108-8a0d-5fb8bed67494" path="/var/lib/kubelet/pods/a013b38e-6ede-4108-8a0d-5fb8bed67494/volumes" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.276632 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b85bc8f8c-p6zd6"] Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.276691 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b85bc8f8c-p6zd6"] Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.276713 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.280318 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.280493 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.296623 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.389328 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.389899 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.389930 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-logs\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.389972 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.390003 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.390064 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqj2\" (UniqueName: \"kubernetes.io/projected/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-kube-api-access-7fqj2\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.390106 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-scripts\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.499265 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fqj2\" (UniqueName: \"kubernetes.io/projected/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-kube-api-access-7fqj2\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.499359 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-scripts\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.499471 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.499520 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.499545 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-logs\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.499578 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.499611 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.515215 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-logs\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.515295 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " 
pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.556104 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.556498 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.557520 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-scripts\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.557818 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.575042 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fqj2\" (UniqueName: \"kubernetes.io/projected/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-kube-api-access-7fqj2\") pod \"cinder-api-0\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.686479 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.885686 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677488855f-bz28w" event={"ID":"5c65e075-ccaa-4054-9903-ebcd26368c00","Type":"ContainerStarted","Data":"7bfa968709c2f13200c602dee043cc7d7e1cdfca759e9a2d1a9a00cfd6894867"} Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.886464 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-677488855f-bz28w" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.892543 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.949105 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-677488855f-bz28w" podStartSLOduration=12.949079357 podStartE2EDuration="12.949079357s" podCreationTimestamp="2026-03-12 08:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:07.919266291 +0000 UTC m=+1521.501302024" watchObservedRunningTime="2026-03-12 08:24:07.949079357 +0000 UTC m=+1521.531115090" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.960599 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f84ff9464-ktgn6" event={"ID":"0102cd54-f02d-4f95-8152-6012f7397103","Type":"ContainerStarted","Data":"f1dc96b62e5bbf12a610d4be5cde36686fa40914d70ccb68bfd5cdb1d35b5aac"} Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.960851 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:07 crc kubenswrapper[4809]: I0312 08:24:07.961336 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.008581 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f84ff9464-ktgn6" podStartSLOduration=5.008557407 podStartE2EDuration="5.008557407s" podCreationTimestamp="2026-03-12 08:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:07.995691789 +0000 UTC m=+1521.577727522" watchObservedRunningTime="2026-03-12 08:24:08.008557407 +0000 UTC m=+1521.590593140" Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.322313 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4qr2q"] Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.591336 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.815146 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.983640 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-combined-ca-bundle\") pod \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.983720 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-scripts\") pod \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.983762 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh8xw\" (UniqueName: \"kubernetes.io/projected/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-kube-api-access-jh8xw\") pod \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\" 
(UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.983949 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-run-httpd\") pod \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.984053 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-sg-core-conf-yaml\") pod \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.984133 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-log-httpd\") pod \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.984200 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-config-data\") pod \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\" (UID: \"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51\") " Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.989823 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" (UID: "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.989852 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" (UID: "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:08 crc kubenswrapper[4809]: I0312 08:24:08.993990 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d797bd8-7c65-4856-9b7b-3c207b1a64c4","Type":"ContainerStarted","Data":"09469de8e095f27b0ca6a7ced8dced6ccf6fb793b2263389fbd945451c3a4746"} Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.003265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f84ff9464-ktgn6" event={"ID":"0102cd54-f02d-4f95-8152-6012f7397103","Type":"ContainerStarted","Data":"b593f6c68fc79223ab2387c53e1c465961f6f0eb32dd9c9783d2405e666174b4"} Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.010105 4809 generic.go:334] "Generic (PLEG): container finished" podID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" containerID="dc909eb537704e5cd8027931364ad77526a7fc046411c0cb47774d61a7694596" exitCode=0 Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.010267 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" event={"ID":"c0262dd0-da72-4f2a-a2e9-bbb12081451d","Type":"ContainerDied","Data":"dc909eb537704e5cd8027931364ad77526a7fc046411c0cb47774d61a7694596"} Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.010306 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" event={"ID":"c0262dd0-da72-4f2a-a2e9-bbb12081451d","Type":"ContainerStarted","Data":"aa872aca52897e149d8de4a40110b18e85410b612a958efd326f4409f1bfdcd9"} Mar 12 08:24:09 crc 
kubenswrapper[4809]: I0312 08:24:09.011319 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-scripts" (OuterVolumeSpecName: "scripts") pod "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" (UID: "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.016013 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-677488855f-bz28w" event={"ID":"5c65e075-ccaa-4054-9903-ebcd26368c00","Type":"ContainerStarted","Data":"c10b539dbc9af4ada3ecea671057a145d7af19f1f9380998e4692e70bad1fba4"} Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.017412 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-kube-api-access-jh8xw" (OuterVolumeSpecName: "kube-api-access-jh8xw") pod "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" (UID: "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51"). InnerVolumeSpecName "kube-api-access-jh8xw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.023693 4809 generic.go:334] "Generic (PLEG): container finished" podID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerID="54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c" exitCode=0 Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.023792 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51","Type":"ContainerDied","Data":"54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c"} Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.023829 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e08acf54-60ca-4dc7-bd01-ce8fed2bcc51","Type":"ContainerDied","Data":"d8443bf1cf96bab059695e478cd26dc91c322ff1cbb87bc7966e5b7dfc72a8ed"} Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.023850 4809 scope.go:117] "RemoveContainer" containerID="f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.024051 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.040791 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5","Type":"ContainerStarted","Data":"f69187559b99e4db63f1a6b99492a6fc03321da92003b60e06ae25b314aa5dc6"} Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.052410 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555064-rz9hw" event={"ID":"bb8a8f45-bb92-4a72-8fcb-5c82ab191829","Type":"ContainerStarted","Data":"7218fd99851780e805adc0b898f00f5c9ab9816475d6a72103dcc775a9395347"} Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.073132 4809 scope.go:117] "RemoveContainer" containerID="54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.085893 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" (UID: "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.086709 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555064-rz9hw" podStartSLOduration=8.072183207 podStartE2EDuration="9.086377059s" podCreationTimestamp="2026-03-12 08:24:00 +0000 UTC" firstStartedPulling="2026-03-12 08:24:06.086978872 +0000 UTC m=+1519.669014595" lastFinishedPulling="2026-03-12 08:24:07.101172714 +0000 UTC m=+1520.683208447" observedRunningTime="2026-03-12 08:24:09.069565734 +0000 UTC m=+1522.651601467" watchObservedRunningTime="2026-03-12 08:24:09.086377059 +0000 UTC m=+1522.668412792" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.090894 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.090935 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh8xw\" (UniqueName: \"kubernetes.io/projected/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-kube-api-access-jh8xw\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.090952 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.090969 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.091017 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-log-httpd\") on node \"crc\" 
DevicePath \"\"" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.101251 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-config-data" (OuterVolumeSpecName: "config-data") pod "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" (UID: "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.115190 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" (UID: "e08acf54-60ca-4dc7-bd01-ce8fed2bcc51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.153235 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fcbe319-aa82-46b8-a366-ec7ef1a7484e" path="/var/lib/kubelet/pods/5fcbe319-aa82-46b8-a366-ec7ef1a7484e/volumes" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.191440 4809 scope.go:117] "RemoveContainer" containerID="f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54" Mar 12 08:24:09 crc kubenswrapper[4809]: E0312 08:24:09.194344 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54\": container with ID starting with f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54 not found: ID does not exist" containerID="f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.194393 4809 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54"} err="failed to get container status \"f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54\": rpc error: code = NotFound desc = could not find container \"f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54\": container with ID starting with f946e4c66d3701365b384171ed1fb9caab9356ff3f24993cf025e6ec57c96f54 not found: ID does not exist" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.194422 4809 scope.go:117] "RemoveContainer" containerID="54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.194609 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.195850 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:09 crc kubenswrapper[4809]: E0312 08:24:09.199893 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c\": container with ID starting with 54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c not found: ID does not exist" containerID="54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.199961 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c"} err="failed to get container status \"54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c\": rpc error: code = 
NotFound desc = could not find container \"54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c\": container with ID starting with 54e1da993d99079580efe774fb59ccc0164bd5825314a8f4a1085cba9d11fa1c not found: ID does not exist" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.533035 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.554338 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.616840 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:09 crc kubenswrapper[4809]: E0312 08:24:09.617623 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerName="ceilometer-notification-agent" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.617652 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerName="ceilometer-notification-agent" Mar 12 08:24:09 crc kubenswrapper[4809]: E0312 08:24:09.617690 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerName="sg-core" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.617697 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerName="sg-core" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.617987 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerName="sg-core" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.618017 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" containerName="ceilometer-notification-agent" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.621025 4809 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.624135 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.626234 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.635402 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.711938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-scripts\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.712263 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.712525 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcch\" (UniqueName: \"kubernetes.io/projected/cd8e10ab-6dee-431d-8f02-af12acc9e823-kube-api-access-nxcch\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.712647 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-log-httpd\") pod \"ceilometer-0\" (UID: 
\"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.712734 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.712813 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-run-httpd\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.712956 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-config-data\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.817734 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-scripts\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.817788 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.817866 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nxcch\" (UniqueName: \"kubernetes.io/projected/cd8e10ab-6dee-431d-8f02-af12acc9e823-kube-api-access-nxcch\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.817909 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-log-httpd\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.817939 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.817963 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-run-httpd\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.817997 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-config-data\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.818999 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-log-httpd\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " 
pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.823525 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-run-httpd\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.831850 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-scripts\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.835041 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-config-data\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.844613 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.851868 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxcch\" (UniqueName: \"kubernetes.io/projected/cd8e10ab-6dee-431d-8f02-af12acc9e823-kube-api-access-nxcch\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.853968 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " pod="openstack/ceilometer-0" Mar 12 08:24:09 crc kubenswrapper[4809]: I0312 08:24:09.964029 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:24:10 crc kubenswrapper[4809]: I0312 08:24:10.106540 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d797bd8-7c65-4856-9b7b-3c207b1a64c4","Type":"ContainerStarted","Data":"38485dbda02f38f759b8a627ebadac18723fadb38f036f5cca5a07bb7f198019"} Mar 12 08:24:10 crc kubenswrapper[4809]: I0312 08:24:10.135689 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" event={"ID":"c0262dd0-da72-4f2a-a2e9-bbb12081451d","Type":"ContainerStarted","Data":"4bd3a15a038aa7494f171ea96ee0e3a348f6f544b0608b49f75ff46c4f12353c"} Mar 12 08:24:10 crc kubenswrapper[4809]: I0312 08:24:10.136241 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:10 crc kubenswrapper[4809]: I0312 08:24:10.185620 4809 generic.go:334] "Generic (PLEG): container finished" podID="bb8a8f45-bb92-4a72-8fcb-5c82ab191829" containerID="7218fd99851780e805adc0b898f00f5c9ab9816475d6a72103dcc775a9395347" exitCode=0 Mar 12 08:24:10 crc kubenswrapper[4809]: I0312 08:24:10.186226 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555064-rz9hw" event={"ID":"bb8a8f45-bb92-4a72-8fcb-5c82ab191829","Type":"ContainerDied","Data":"7218fd99851780e805adc0b898f00f5c9ab9816475d6a72103dcc775a9395347"} Mar 12 08:24:10 crc kubenswrapper[4809]: I0312 08:24:10.228372 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" podStartSLOduration=4.2283447259999996 podStartE2EDuration="4.228344726s" 
podCreationTimestamp="2026-03-12 08:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:10.200371409 +0000 UTC m=+1523.782407142" watchObservedRunningTime="2026-03-12 08:24:10.228344726 +0000 UTC m=+1523.810380449" Mar 12 08:24:10 crc kubenswrapper[4809]: I0312 08:24:10.574303 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:10 crc kubenswrapper[4809]: I0312 08:24:10.872321 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.123503 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e08acf54-60ca-4dc7-bd01-ce8fed2bcc51" path="/var/lib/kubelet/pods/e08acf54-60ca-4dc7-bd01-ce8fed2bcc51/volumes" Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.229964 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d797bd8-7c65-4856-9b7b-3c207b1a64c4","Type":"ContainerStarted","Data":"435f2cbaa12c9c1d3ad668b2f3265fca77c1ba7eacba43b2082cdd03c435fef4"} Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.230265 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api-log" containerID="cri-o://38485dbda02f38f759b8a627ebadac18723fadb38f036f5cca5a07bb7f198019" gracePeriod=30 Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.230374 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.230929 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api" 
containerID="cri-o://435f2cbaa12c9c1d3ad668b2f3265fca77c1ba7eacba43b2082cdd03c435fef4" gracePeriod=30 Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.241720 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5","Type":"ContainerStarted","Data":"28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58"} Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.246650 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerStarted","Data":"97a5c0ea02a25363ec34b664c63a82842058dbc2039d427e558edf8a48938455"} Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.289516 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.289477427 podStartE2EDuration="5.289477427s" podCreationTimestamp="2026-03-12 08:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:11.266198957 +0000 UTC m=+1524.848234690" watchObservedRunningTime="2026-03-12 08:24:11.289477427 +0000 UTC m=+1524.871513160" Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.787554 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555064-rz9hw" Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.893823 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2hxg\" (UniqueName: \"kubernetes.io/projected/bb8a8f45-bb92-4a72-8fcb-5c82ab191829-kube-api-access-t2hxg\") pod \"bb8a8f45-bb92-4a72-8fcb-5c82ab191829\" (UID: \"bb8a8f45-bb92-4a72-8fcb-5c82ab191829\") " Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.919102 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8a8f45-bb92-4a72-8fcb-5c82ab191829-kube-api-access-t2hxg" (OuterVolumeSpecName: "kube-api-access-t2hxg") pod "bb8a8f45-bb92-4a72-8fcb-5c82ab191829" (UID: "bb8a8f45-bb92-4a72-8fcb-5c82ab191829"). InnerVolumeSpecName "kube-api-access-t2hxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:11 crc kubenswrapper[4809]: I0312 08:24:11.997332 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2hxg\" (UniqueName: \"kubernetes.io/projected/bb8a8f45-bb92-4a72-8fcb-5c82ab191829-kube-api-access-t2hxg\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.175846 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555058-q6lwz"] Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.187543 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555058-q6lwz"] Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.262243 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerStarted","Data":"6392b8a3e03c4f8ab556c0f28a3f56a4799a23a97fdd2e354ca291a1255faf1a"} Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.269240 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerID="38485dbda02f38f759b8a627ebadac18723fadb38f036f5cca5a07bb7f198019" exitCode=143 Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.269321 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d797bd8-7c65-4856-9b7b-3c207b1a64c4","Type":"ContainerDied","Data":"38485dbda02f38f759b8a627ebadac18723fadb38f036f5cca5a07bb7f198019"} Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.270991 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5","Type":"ContainerStarted","Data":"2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd"} Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.291651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555064-rz9hw" event={"ID":"bb8a8f45-bb92-4a72-8fcb-5c82ab191829","Type":"ContainerDied","Data":"2268643d51ba948b0a7921251393e43b3a85d21fca80a7f7e0b6a028c4eef956"} Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.291725 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2268643d51ba948b0a7921251393e43b3a85d21fca80a7f7e0b6a028c4eef956" Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.291809 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555064-rz9hw" Mar 12 08:24:12 crc kubenswrapper[4809]: I0312 08:24:12.300609 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.03746582 podStartE2EDuration="6.300584554s" podCreationTimestamp="2026-03-12 08:24:06 +0000 UTC" firstStartedPulling="2026-03-12 08:24:07.952545201 +0000 UTC m=+1521.534580934" lastFinishedPulling="2026-03-12 08:24:09.215663935 +0000 UTC m=+1522.797699668" observedRunningTime="2026-03-12 08:24:12.300444741 +0000 UTC m=+1525.882480474" watchObservedRunningTime="2026-03-12 08:24:12.300584554 +0000 UTC m=+1525.882620287" Mar 12 08:24:13 crc kubenswrapper[4809]: I0312 08:24:13.122833 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ee5bca-cc6a-4e65-bd6e-9ee1156c19de" path="/var/lib/kubelet/pods/58ee5bca-cc6a-4e65-bd6e-9ee1156c19de/volumes" Mar 12 08:24:13 crc kubenswrapper[4809]: I0312 08:24:13.313391 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerStarted","Data":"f7642ca7d724aec9fe410366f0506fcd7d36c927caadb24f7acd63ff499120fb"} Mar 12 08:24:13 crc kubenswrapper[4809]: I0312 08:24:13.334446 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bcp4" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" probeResult="failure" output=< Mar 12 08:24:13 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:24:13 crc kubenswrapper[4809]: > Mar 12 08:24:13 crc kubenswrapper[4809]: I0312 08:24:13.620752 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-76f57f5c5-gb5xf" Mar 12 08:24:14 crc kubenswrapper[4809]: I0312 08:24:14.443644 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerStarted","Data":"7a1781ec7a3be4ed5094c635f78927db827be1303297f2af393a13c8b83b998a"} Mar 12 08:24:15 crc kubenswrapper[4809]: I0312 08:24:15.048508 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:24:15 crc kubenswrapper[4809]: I0312 08:24:15.048595 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.470281 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerStarted","Data":"646f034d2fe028a00e20db2f8597d5d4743f531a101cab39d592f1ca8bcedc67"} Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.470787 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.500554 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.134567354 podStartE2EDuration="7.500534593s" podCreationTimestamp="2026-03-12 08:24:09 +0000 UTC" firstStartedPulling="2026-03-12 08:24:10.883993519 +0000 UTC m=+1524.466029242" lastFinishedPulling="2026-03-12 08:24:15.249960748 +0000 UTC m=+1528.831996481" observedRunningTime="2026-03-12 08:24:16.495763283 +0000 UTC m=+1530.077799016" watchObservedRunningTime="2026-03-12 08:24:16.500534593 +0000 UTC m=+1530.082570326" Mar 12 08:24:16 crc 
kubenswrapper[4809]: I0312 08:24:16.870329 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 12 08:24:16 crc kubenswrapper[4809]: E0312 08:24:16.871134 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8a8f45-bb92-4a72-8fcb-5c82ab191829" containerName="oc" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.871160 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8a8f45-bb92-4a72-8fcb-5c82ab191829" containerName="oc" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.871458 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8a8f45-bb92-4a72-8fcb-5c82ab191829" containerName="oc" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.872656 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.875292 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.875473 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.875993 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-t24jh" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.895595 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.952131 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config-secret\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.952386 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s9nz\" (UniqueName: \"kubernetes.io/projected/dad73b51-42b4-45f0-9a20-d16a4227cb64-kube-api-access-5s9nz\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.952632 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.952756 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:16 crc kubenswrapper[4809]: I0312 08:24:16.975467 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.055508 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s9nz\" (UniqueName: \"kubernetes.io/projected/dad73b51-42b4-45f0-9a20-d16a4227cb64-kube-api-access-5s9nz\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.055986 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " 
pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.056044 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.056161 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config-secret\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.056919 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.063185 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config-secret\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.072898 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.075870 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-5s9nz\" (UniqueName: \"kubernetes.io/projected/dad73b51-42b4-45f0-9a20-d16a4227cb64-kube-api-access-5s9nz\") pod \"openstackclient\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.139215 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.139993 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.157219 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.206796 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.209062 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.209393 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.295678 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.339192 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.396764 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-openstack-config\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.397156 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpwk\" (UniqueName: \"kubernetes.io/projected/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-kube-api-access-wrpwk\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.397201 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.397255 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-openstack-config-secret\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.443388 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-jrkvl"] Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.443731 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" podUID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" containerName="dnsmasq-dns" containerID="cri-o://bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4" gracePeriod=10 Mar 12 08:24:17 crc kubenswrapper[4809]: E0312 08:24:17.471584 4809 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 12 08:24:17 crc kubenswrapper[4809]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_openstackclient_openstack_dad73b51-42b4-45f0-9a20-d16a4227cb64_0(e30dd8c1b2f2f33d4689c4dc3b742877dae3d258a2ac4d114544de485e3d35ff): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e30dd8c1b2f2f33d4689c4dc3b742877dae3d258a2ac4d114544de485e3d35ff" Netns:"/var/run/netns/7a0a5241-ea47-4310-b0e5-15602bd415fd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=e30dd8c1b2f2f33d4689c4dc3b742877dae3d258a2ac4d114544de485e3d35ff;K8S_POD_UID=dad73b51-42b4-45f0-9a20-d16a4227cb64" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/dad73b51-42b4-45f0-9a20-d16a4227cb64]: expected pod UID "dad73b51-42b4-45f0-9a20-d16a4227cb64" but got "1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd" from Kube API Mar 12 08:24:17 crc kubenswrapper[4809]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 08:24:17 crc kubenswrapper[4809]: > Mar 12 08:24:17 crc kubenswrapper[4809]: E0312 08:24:17.473497 4809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 12 08:24:17 crc kubenswrapper[4809]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_dad73b51-42b4-45f0-9a20-d16a4227cb64_0(e30dd8c1b2f2f33d4689c4dc3b742877dae3d258a2ac4d114544de485e3d35ff): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request 
failed with status 400: 'ContainerID:"e30dd8c1b2f2f33d4689c4dc3b742877dae3d258a2ac4d114544de485e3d35ff" Netns:"/var/run/netns/7a0a5241-ea47-4310-b0e5-15602bd415fd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=e30dd8c1b2f2f33d4689c4dc3b742877dae3d258a2ac4d114544de485e3d35ff;K8S_POD_UID=dad73b51-42b4-45f0-9a20-d16a4227cb64" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/dad73b51-42b4-45f0-9a20-d16a4227cb64]: expected pod UID "dad73b51-42b4-45f0-9a20-d16a4227cb64" but got "1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd" from Kube API Mar 12 08:24:17 crc kubenswrapper[4809]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 08:24:17 crc kubenswrapper[4809]: > pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.501190 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-openstack-config\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.501242 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrpwk\" (UniqueName: \"kubernetes.io/projected/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-kube-api-access-wrpwk\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.501272 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.501300 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-openstack-config-secret\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.503229 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-openstack-config\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.507635 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.508218 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-openstack-config-secret\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.533413 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrpwk\" (UniqueName: 
\"kubernetes.io/projected/1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd-kube-api-access-wrpwk\") pod \"openstackclient\" (UID: \"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd\") " pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.564012 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.665998 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 12 08:24:17 crc kubenswrapper[4809]: I0312 08:24:17.957931 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.016038 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-svc\") pod \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.016145 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-sb\") pod \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.016336 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-swift-storage-0\") pod \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.016380 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbmns\" (UniqueName: 
\"kubernetes.io/projected/b6e69d29-6d0e-4bad-a44b-6a24e4971281-kube-api-access-dbmns\") pod \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.016480 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-config\") pod \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.016503 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-nb\") pod \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\" (UID: \"b6e69d29-6d0e-4bad-a44b-6a24e4971281\") " Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.024401 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e69d29-6d0e-4bad-a44b-6a24e4971281-kube-api-access-dbmns" (OuterVolumeSpecName: "kube-api-access-dbmns") pod "b6e69d29-6d0e-4bad-a44b-6a24e4971281" (UID: "b6e69d29-6d0e-4bad-a44b-6a24e4971281"). InnerVolumeSpecName "kube-api-access-dbmns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.122487 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbmns\" (UniqueName: \"kubernetes.io/projected/b6e69d29-6d0e-4bad-a44b-6a24e4971281-kube-api-access-dbmns\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.129123 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6e69d29-6d0e-4bad-a44b-6a24e4971281" (UID: "b6e69d29-6d0e-4bad-a44b-6a24e4971281"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.137683 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b6e69d29-6d0e-4bad-a44b-6a24e4971281" (UID: "b6e69d29-6d0e-4bad-a44b-6a24e4971281"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.144576 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b6e69d29-6d0e-4bad-a44b-6a24e4971281" (UID: "b6e69d29-6d0e-4bad-a44b-6a24e4971281"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.180605 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b6e69d29-6d0e-4bad-a44b-6a24e4971281" (UID: "b6e69d29-6d0e-4bad-a44b-6a24e4971281"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.208300 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-config" (OuterVolumeSpecName: "config") pod "b6e69d29-6d0e-4bad-a44b-6a24e4971281" (UID: "b6e69d29-6d0e-4bad-a44b-6a24e4971281"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.234431 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.236480 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.236508 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.236637 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.236664 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6e69d29-6d0e-4bad-a44b-6a24e4971281-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.324326 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.516303 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd","Type":"ContainerStarted","Data":"fb1771ec779edae8f3c01c6944fc36dc1c19aecee941269e7256bbad31637223"}
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.520567 4809 generic.go:334] "Generic (PLEG): container finished" podID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" containerID="bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4" exitCode=0
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.520682 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.520776 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.520866 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" event={"ID":"b6e69d29-6d0e-4bad-a44b-6a24e4971281","Type":"ContainerDied","Data":"bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4"}
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.521981 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-jrkvl" event={"ID":"b6e69d29-6d0e-4bad-a44b-6a24e4971281","Type":"ContainerDied","Data":"829953f1aa9c5a5d1c33383ab69814d7cb87a60dc961f49ec51abd12299b56d0"}
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.521998 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerName="probe" containerID="cri-o://2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd" gracePeriod=30
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.522152 4809 scope.go:117] "RemoveContainer" containerID="bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.521758 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerName="cinder-scheduler" containerID="cri-o://28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58" gracePeriod=30
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.528264 4809 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="dad73b51-42b4-45f0-9a20-d16a4227cb64" podUID="1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.540320 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.562211 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-jrkvl"]
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.566226 4809 scope.go:117] "RemoveContainer" containerID="a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.580333 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-jrkvl"]
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.590620 4809 scope.go:117] "RemoveContainer" containerID="bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4"
Mar 12 08:24:18 crc kubenswrapper[4809]: E0312 08:24:18.591368 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4\": container with ID starting with bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4 not found: ID does not exist" containerID="bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.591418 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4"} err="failed to get container status \"bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4\": rpc error: code = NotFound desc = could not find container \"bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4\": container with ID starting with bc62f61b9d135f8f6be1a4a157b54c9f2294d5639bfa26f37f2e69241d3ec8f4 not found: ID does not exist"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.591456 4809 scope.go:117] "RemoveContainer" containerID="a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa"
Mar 12 08:24:18 crc kubenswrapper[4809]: E0312 08:24:18.591845 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa\": container with ID starting with a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa not found: ID does not exist" containerID="a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.591912 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa"} err="failed to get container status \"a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa\": rpc error: code = NotFound desc = could not find container \"a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa\": container with ID starting with a728e89f1daf26e356a57f592e3f7a7d0d65fbfc06455f026c75a4d30fdb5efa not found: ID does not exist"
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.652783 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config\") pod \"dad73b51-42b4-45f0-9a20-d16a4227cb64\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") "
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.653398 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "dad73b51-42b4-45f0-9a20-d16a4227cb64" (UID: "dad73b51-42b4-45f0-9a20-d16a4227cb64"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.653455 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s9nz\" (UniqueName: \"kubernetes.io/projected/dad73b51-42b4-45f0-9a20-d16a4227cb64-kube-api-access-5s9nz\") pod \"dad73b51-42b4-45f0-9a20-d16a4227cb64\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") "
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.653596 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-combined-ca-bundle\") pod \"dad73b51-42b4-45f0-9a20-d16a4227cb64\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") "
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.653702 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config-secret\") pod \"dad73b51-42b4-45f0-9a20-d16a4227cb64\" (UID: \"dad73b51-42b4-45f0-9a20-d16a4227cb64\") "
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.654369 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.660057 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "dad73b51-42b4-45f0-9a20-d16a4227cb64" (UID: "dad73b51-42b4-45f0-9a20-d16a4227cb64"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.660978 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad73b51-42b4-45f0-9a20-d16a4227cb64-kube-api-access-5s9nz" (OuterVolumeSpecName: "kube-api-access-5s9nz") pod "dad73b51-42b4-45f0-9a20-d16a4227cb64" (UID: "dad73b51-42b4-45f0-9a20-d16a4227cb64"). InnerVolumeSpecName "kube-api-access-5s9nz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.662728 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dad73b51-42b4-45f0-9a20-d16a4227cb64" (UID: "dad73b51-42b4-45f0-9a20-d16a4227cb64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.756862 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s9nz\" (UniqueName: \"kubernetes.io/projected/dad73b51-42b4-45f0-9a20-d16a4227cb64-kube-api-access-5s9nz\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.756904 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:18 crc kubenswrapper[4809]: I0312 08:24:18.756917 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dad73b51-42b4-45f0-9a20-d16a4227cb64-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:19 crc kubenswrapper[4809]: I0312 08:24:19.128407 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" path="/var/lib/kubelet/pods/b6e69d29-6d0e-4bad-a44b-6a24e4971281/volumes"
Mar 12 08:24:19 crc kubenswrapper[4809]: I0312 08:24:19.130064 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad73b51-42b4-45f0-9a20-d16a4227cb64" path="/var/lib/kubelet/pods/dad73b51-42b4-45f0-9a20-d16a4227cb64/volumes"
Mar 12 08:24:19 crc kubenswrapper[4809]: I0312 08:24:19.569332 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 12 08:24:19 crc kubenswrapper[4809]: I0312 08:24:19.592496 4809 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="dad73b51-42b4-45f0-9a20-d16a4227cb64" podUID="1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd"
Mar 12 08:24:20 crc kubenswrapper[4809]: I0312 08:24:20.606682 4809 generic.go:334] "Generic (PLEG): container finished" podID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerID="2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd" exitCode=0
Mar 12 08:24:20 crc kubenswrapper[4809]: I0312 08:24:20.606751 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5","Type":"ContainerDied","Data":"2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd"}
Mar 12 08:24:20 crc kubenswrapper[4809]: I0312 08:24:20.769012 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.241923 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-65959dcdbb-g8h24"]
Mar 12 08:24:21 crc kubenswrapper[4809]: E0312 08:24:21.245960 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" containerName="init"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.246002 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" containerName="init"
Mar 12 08:24:21 crc kubenswrapper[4809]: E0312 08:24:21.246057 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" containerName="dnsmasq-dns"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.246065 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" containerName="dnsmasq-dns"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.246571 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e69d29-6d0e-4bad-a44b-6a24e4971281" containerName="dnsmasq-dns"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.252764 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.260772 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.269654 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4t7mv"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.276658 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.305777 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-65959dcdbb-g8h24"]
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.446936 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data-custom\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.447318 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.447352 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2gx\" (UniqueName: \"kubernetes.io/projected/970fd7c0-4095-4aa7-8e61-f300972a7124-kube-api-access-7z2gx\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.447391 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-combined-ca-bundle\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.462666 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-7c7km"]
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.464912 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.493825 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-7c7km"]
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.549205 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data-custom\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.549416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.549455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z2gx\" (UniqueName: \"kubernetes.io/projected/970fd7c0-4095-4aa7-8e61-f300972a7124-kube-api-access-7z2gx\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.549499 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-combined-ca-bundle\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.561751 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data-custom\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.572704 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5f7c584ffb-4dlpd"]
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.573422 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-combined-ca-bundle\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.574491 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.575332 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.579071 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.602871 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z2gx\" (UniqueName: \"kubernetes.io/projected/970fd7c0-4095-4aa7-8e61-f300972a7124-kube-api-access-7z2gx\") pod \"heat-engine-65959dcdbb-g8h24\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " pod="openstack/heat-engine-65959dcdbb-g8h24"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.608374 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f7c584ffb-4dlpd"]
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.627319 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.632791 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7fb898569b-fzjx6"]
Mar 12 08:24:21 crc kubenswrapper[4809]: E0312 08:24:21.633402 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerName="probe"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.633424 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerName="probe"
Mar 12 08:24:21 crc kubenswrapper[4809]: E0312 08:24:21.633438 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerName="cinder-scheduler"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.633446 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerName="cinder-scheduler"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.633666 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerName="cinder-scheduler"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.633704 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerName="probe"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.651812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.651913 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.651940 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.651960 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dzg8\" (UniqueName: \"kubernetes.io/projected/43a2fa45-78b7-4556-802d-ec28c44a4f12-kube-api-access-9dzg8\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.652038 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.652073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-config\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.653838 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7fb898569b-fzjx6"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.668476 4809 generic.go:334] "Generic (PLEG): container finished" podID="4056654a-5349-4947-9a20-99626cb45c87" containerID="59909581a8e075412baec12432956bda87a8d1a40bf16b140c0a714ceb712a79" exitCode=137
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.668631 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.668655 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" event={"ID":"4056654a-5349-4947-9a20-99626cb45c87","Type":"ContainerDied","Data":"59909581a8e075412baec12432956bda87a8d1a40bf16b140c0a714ceb712a79"}
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.693774 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7fb898569b-fzjx6"]
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.728868 4809 generic.go:334] "Generic (PLEG): container finished" podID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" containerID="28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58" exitCode=0
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.729017 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5","Type":"ContainerDied","Data":"28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58"}
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.729060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5","Type":"ContainerDied","Data":"f69187559b99e4db63f1a6b99492a6fc03321da92003b60e06ae25b314aa5dc6"}
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.729089 4809 scope.go:117] "RemoveContainer" containerID="2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.729371 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.762291 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnh8t\" (UniqueName: \"kubernetes.io/projected/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-kube-api-access-xnh8t\") pod \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") "
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.762355 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data\") pod \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") "
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763046 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-combined-ca-bundle\") pod \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") "
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763155 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-scripts\") pod \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") "
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763243 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data-custom\") pod \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") "
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763301 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-etc-machine-id\") pod \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\" (UID: \"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5\") "
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763661 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-combined-ca-bundle\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763710 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763744 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763777 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dzg8\" (UniqueName: \"kubernetes.io/projected/43a2fa45-78b7-4556-802d-ec28c44a4f12-kube-api-access-9dzg8\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763802 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh57h\" (UniqueName: \"kubernetes.io/projected/3c291d73-cd7f-494e-8684-e6bda1c78259-kube-api-access-bh57h\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763844 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data-custom\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763878 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-combined-ca-bundle\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.763969 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.764021 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.764049 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-config\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.764084 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data-custom\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.765545 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.766592 4809 generic.go:334] "Generic (PLEG): container finished" podID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerID="a8657b1ff2bb53e82c78c19f96e3df8f79962e9fd0898bff6f9c3c136e1c82be" exitCode=137
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.766649 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" event={"ID":"7a1edce5-95e6-4080-935e-07399aaa4f89","Type":"ContainerDied","Data":"a8657b1ff2bb53e82c78c19f96e3df8f79962e9fd0898bff6f9c3c136e1c82be"}
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.767428 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" (UID: "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.774870 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" (UID: "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.776654 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.777614 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-config\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.778285 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.778351 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szncq\" (UniqueName: \"kubernetes.io/projected/5e637b87-4eab-400a-b98b-08f2da100650-kube-api-access-szncq\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.778580 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.778865 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.778890 4809 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.779685 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.781594 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-scripts" (OuterVolumeSpecName: "scripts") pod
"0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" (UID: "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.789367 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.789689 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-kube-api-access-xnh8t" (OuterVolumeSpecName: "kube-api-access-xnh8t") pod "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" (UID: "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5"). InnerVolumeSpecName "kube-api-access-xnh8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.802520 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dzg8\" (UniqueName: \"kubernetes.io/projected/43a2fa45-78b7-4556-802d-ec28c44a4f12-kube-api-access-9dzg8\") pod \"dnsmasq-dns-7756b9d78c-7c7km\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") " pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.827369 4809 scope.go:117] "RemoveContainer" containerID="28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.876905 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-65959dcdbb-g8h24" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.887589 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-combined-ca-bundle\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.887717 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh57h\" (UniqueName: \"kubernetes.io/projected/3c291d73-cd7f-494e-8684-e6bda1c78259-kube-api-access-bh57h\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.887790 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data-custom\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.887827 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-combined-ca-bundle\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.888007 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " 
pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.888059 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data-custom\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.888142 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.888174 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szncq\" (UniqueName: \"kubernetes.io/projected/5e637b87-4eab-400a-b98b-08f2da100650-kube-api-access-szncq\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.888303 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnh8t\" (UniqueName: \"kubernetes.io/projected/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-kube-api-access-xnh8t\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.888324 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.894499 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" (UID: "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.895143 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data-custom\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.903520 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.913203 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-combined-ca-bundle\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.914086 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data-custom\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.914631 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.925066 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szncq\" (UniqueName: \"kubernetes.io/projected/5e637b87-4eab-400a-b98b-08f2da100650-kube-api-access-szncq\") pod \"heat-cfnapi-5f7c584ffb-4dlpd\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.925818 4809 scope.go:117] "RemoveContainer" containerID="2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.927528 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:21 crc kubenswrapper[4809]: E0312 08:24:21.935457 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd\": container with ID starting with 2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd not found: ID does not exist" containerID="2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.935529 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd"} err="failed to get container status \"2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd\": rpc error: code = NotFound desc = could not find container \"2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd\": container with ID starting with 2f22a8f4a042cb492d619423ac46fe58c63e208335a4bedcb67b1bd8a22356cd not found: ID does not exist" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.935577 4809 
scope.go:117] "RemoveContainer" containerID="28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58" Mar 12 08:24:21 crc kubenswrapper[4809]: E0312 08:24:21.937730 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58\": container with ID starting with 28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58 not found: ID does not exist" containerID="28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.937787 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58"} err="failed to get container status \"28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58\": rpc error: code = NotFound desc = could not find container \"28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58\": container with ID starting with 28dc9a539be727f4312fc6a8acf53250cb5e3ef0d61dafc6201f658582b55f58 not found: ID does not exist" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.944469 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-combined-ca-bundle\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:21 crc kubenswrapper[4809]: I0312 08:24:21.952956 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:21.993051 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.082300 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh57h\" (UniqueName: \"kubernetes.io/projected/3c291d73-cd7f-494e-8684-e6bda1c78259-kube-api-access-bh57h\") pod \"heat-api-7fb898569b-fzjx6\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.197393 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data" (OuterVolumeSpecName: "config-data") pod "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" (UID: "0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.232592 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.296145 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.325552 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.449254 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data-custom\") pod \"7a1edce5-95e6-4080-935e-07399aaa4f89\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.449719 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a1edce5-95e6-4080-935e-07399aaa4f89-logs\") pod \"7a1edce5-95e6-4080-935e-07399aaa4f89\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.449818 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbvzn\" (UniqueName: \"kubernetes.io/projected/7a1edce5-95e6-4080-935e-07399aaa4f89-kube-api-access-mbvzn\") pod \"7a1edce5-95e6-4080-935e-07399aaa4f89\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.449851 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-combined-ca-bundle\") pod \"7a1edce5-95e6-4080-935e-07399aaa4f89\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.449873 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data\") pod \"7a1edce5-95e6-4080-935e-07399aaa4f89\" (UID: \"7a1edce5-95e6-4080-935e-07399aaa4f89\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.453869 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7a1edce5-95e6-4080-935e-07399aaa4f89-logs" (OuterVolumeSpecName: "logs") pod "7a1edce5-95e6-4080-935e-07399aaa4f89" (UID: "7a1edce5-95e6-4080-935e-07399aaa4f89"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.474644 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.482409 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7a1edce5-95e6-4080-935e-07399aaa4f89" (UID: "7a1edce5-95e6-4080-935e-07399aaa4f89"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.495852 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1edce5-95e6-4080-935e-07399aaa4f89-kube-api-access-mbvzn" (OuterVolumeSpecName: "kube-api-access-mbvzn") pod "7a1edce5-95e6-4080-935e-07399aaa4f89" (UID: "7a1edce5-95e6-4080-935e-07399aaa4f89"). InnerVolumeSpecName "kube-api-access-mbvzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.522893 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.533281 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a1edce5-95e6-4080-935e-07399aaa4f89" (UID: "7a1edce5-95e6-4080-935e-07399aaa4f89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.568062 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbvzn\" (UniqueName: \"kubernetes.io/projected/7a1edce5-95e6-4080-935e-07399aaa4f89-kube-api-access-mbvzn\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.568100 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.568124 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.568136 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a1edce5-95e6-4080-935e-07399aaa4f89-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.571107 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data" (OuterVolumeSpecName: "config-data") pod "7a1edce5-95e6-4080-935e-07399aaa4f89" (UID: "7a1edce5-95e6-4080-935e-07399aaa4f89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.609443 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.675189 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data-custom\") pod \"4056654a-5349-4947-9a20-99626cb45c87\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.675822 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lspp\" (UniqueName: \"kubernetes.io/projected/4056654a-5349-4947-9a20-99626cb45c87-kube-api-access-4lspp\") pod \"4056654a-5349-4947-9a20-99626cb45c87\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.675965 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4056654a-5349-4947-9a20-99626cb45c87-logs\") pod \"4056654a-5349-4947-9a20-99626cb45c87\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.676078 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-combined-ca-bundle\") pod \"4056654a-5349-4947-9a20-99626cb45c87\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.681302 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data\") pod \"4056654a-5349-4947-9a20-99626cb45c87\" (UID: \"4056654a-5349-4947-9a20-99626cb45c87\") " Mar 12 08:24:22 crc 
kubenswrapper[4809]: I0312 08:24:22.682537 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a1edce5-95e6-4080-935e-07399aaa4f89-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.689898 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4056654a-5349-4947-9a20-99626cb45c87-logs" (OuterVolumeSpecName: "logs") pod "4056654a-5349-4947-9a20-99626cb45c87" (UID: "4056654a-5349-4947-9a20-99626cb45c87"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.716785 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4056654a-5349-4947-9a20-99626cb45c87-kube-api-access-4lspp" (OuterVolumeSpecName: "kube-api-access-4lspp") pod "4056654a-5349-4947-9a20-99626cb45c87" (UID: "4056654a-5349-4947-9a20-99626cb45c87"). InnerVolumeSpecName "kube-api-access-4lspp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.716940 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:22 crc kubenswrapper[4809]: E0312 08:24:22.724354 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerName="barbican-keystone-listener" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.724377 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerName="barbican-keystone-listener" Mar 12 08:24:22 crc kubenswrapper[4809]: E0312 08:24:22.724415 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerName="barbican-keystone-listener-log" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.724422 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerName="barbican-keystone-listener-log" Mar 12 08:24:22 crc kubenswrapper[4809]: E0312 08:24:22.724436 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4056654a-5349-4947-9a20-99626cb45c87" containerName="barbican-worker" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.724443 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4056654a-5349-4947-9a20-99626cb45c87" containerName="barbican-worker" Mar 12 08:24:22 crc kubenswrapper[4809]: E0312 08:24:22.724460 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4056654a-5349-4947-9a20-99626cb45c87" containerName="barbican-worker-log" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.724467 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4056654a-5349-4947-9a20-99626cb45c87" containerName="barbican-worker-log" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.732958 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4056654a-5349-4947-9a20-99626cb45c87" containerName="barbican-worker" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.732987 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerName="barbican-keystone-listener" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.733013 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" containerName="barbican-keystone-listener-log" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.733022 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4056654a-5349-4947-9a20-99626cb45c87" containerName="barbican-worker-log" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.734680 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.737275 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4056654a-5349-4947-9a20-99626cb45c87" (UID: "4056654a-5349-4947-9a20-99626cb45c87"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.748258 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.791305 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4056654a-5349-4947-9a20-99626cb45c87" (UID: "4056654a-5349-4947-9a20-99626cb45c87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.792150 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.811218 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lspp\" (UniqueName: \"kubernetes.io/projected/4056654a-5349-4947-9a20-99626cb45c87-kube-api-access-4lspp\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.811303 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4056654a-5349-4947-9a20-99626cb45c87-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.811318 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.851080 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.869073 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data" (OuterVolumeSpecName: "config-data") pod "4056654a-5349-4947-9a20-99626cb45c87" (UID: "4056654a-5349-4947-9a20-99626cb45c87"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.877819 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" event={"ID":"4056654a-5349-4947-9a20-99626cb45c87","Type":"ContainerDied","Data":"8eccdad5aacb690d2ad207eb8b9ce84ebc2e4235acca1c9a632c5ffbd9172771"} Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.877887 4809 scope.go:117] "RemoveContainer" containerID="59909581a8e075412baec12432956bda87a8d1a40bf16b140c0a714ceb712a79" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.878054 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d46b6b97c-tzjzj" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.915778 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.915842 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.915874 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6wqs\" (UniqueName: \"kubernetes.io/projected/6f39b431-0c84-4f84-b887-d5f74af3d573-kube-api-access-t6wqs\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.915921 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f39b431-0c84-4f84-b887-d5f74af3d573-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.915945 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.916000 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.916126 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4056654a-5349-4947-9a20-99626cb45c87-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.942419 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" event={"ID":"7a1edce5-95e6-4080-935e-07399aaa4f89","Type":"ContainerDied","Data":"552b570ccf94bb17eeee17a5cf6a64a67ca7295489d2ada633fc5619650be6c6"} Mar 12 08:24:22 crc kubenswrapper[4809]: I0312 08:24:22.942554 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65c74fbb78-q9c6j" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.018878 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.018973 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.019142 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.019160 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.019181 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6wqs\" (UniqueName: \"kubernetes.io/projected/6f39b431-0c84-4f84-b887-d5f74af3d573-kube-api-access-t6wqs\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.019228 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f39b431-0c84-4f84-b887-d5f74af3d573-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.019329 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f39b431-0c84-4f84-b887-d5f74af3d573-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.031845 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.032069 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.036390 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.038370 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f39b431-0c84-4f84-b887-d5f74af3d573-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.073137 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6wqs\" (UniqueName: \"kubernetes.io/projected/6f39b431-0c84-4f84-b887-d5f74af3d573-kube-api-access-t6wqs\") pod \"cinder-scheduler-0\" (UID: \"6f39b431-0c84-4f84-b887-d5f74af3d573\") " pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.147821 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5" path="/var/lib/kubelet/pods/0bd8e81e-3ef1-4b3e-8b45-ba06a0ff1fb5/volumes" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.202760 4809 scope.go:117] "RemoveContainer" containerID="938fceeaa88246079030664c43060e5e347a3940d5aa4be9c6fa1115c13dd005" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.230151 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7d46b6b97c-tzjzj"] Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.264452 4809 scope.go:117] "RemoveContainer" containerID="a8657b1ff2bb53e82c78c19f96e3df8f79962e9fd0898bff6f9c3c136e1c82be" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.266339 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7d46b6b97c-tzjzj"] Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.292960 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-65c74fbb78-q9c6j"] Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.303715 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-65c74fbb78-q9c6j"] Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.308103 4809 scope.go:117] "RemoveContainer" containerID="d3c2172fcadc00c0e9f60a8390e41884b9cc66b77ca35abf7ca63644e94ab4af" Mar 12 08:24:23 crc 
kubenswrapper[4809]: I0312 08:24:23.352925 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bcp4" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" probeResult="failure" output=< Mar 12 08:24:23 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:24:23 crc kubenswrapper[4809]: > Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.374162 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.467555 4809 scope.go:117] "RemoveContainer" containerID="f3db8f17cfacd1e7dbec9a00a340b5a8a163c21025c6d5d51a65608f3fa28bb8" Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.653919 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f7c584ffb-4dlpd"] Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.688347 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7fb898569b-fzjx6"] Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.708232 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-7c7km"] Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 08:24:23.752074 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-65959dcdbb-g8h24"] Mar 12 08:24:23 crc kubenswrapper[4809]: W0312 08:24:23.794322 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a2fa45_78b7_4556_802d_ec28c44a4f12.slice/crio-ce37f3654fa076ea7833a758c920562b5fd40ee8bab7f74f35c0e458a0f67911 WatchSource:0}: Error finding container ce37f3654fa076ea7833a758c920562b5fd40ee8bab7f74f35c0e458a0f67911: Status 404 returned error can't find the container with id ce37f3654fa076ea7833a758c920562b5fd40ee8bab7f74f35c0e458a0f67911 Mar 12 08:24:23 crc kubenswrapper[4809]: I0312 
08:24:23.994148 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 08:24:24 crc kubenswrapper[4809]: I0312 08:24:24.050683 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" event={"ID":"43a2fa45-78b7-4556-802d-ec28c44a4f12","Type":"ContainerStarted","Data":"ce37f3654fa076ea7833a758c920562b5fd40ee8bab7f74f35c0e458a0f67911"} Mar 12 08:24:24 crc kubenswrapper[4809]: I0312 08:24:24.064767 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7fb898569b-fzjx6" event={"ID":"3c291d73-cd7f-494e-8684-e6bda1c78259","Type":"ContainerStarted","Data":"0bf55d0d7023594f319062c0eeda7eff49e51d8b80564d2182890187b4d55e76"} Mar 12 08:24:24 crc kubenswrapper[4809]: I0312 08:24:24.075559 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65959dcdbb-g8h24" event={"ID":"970fd7c0-4095-4aa7-8e61-f300972a7124","Type":"ContainerStarted","Data":"f3171087752fe2b9c63669c3d53986c7ca3d32fe9f769e2ea4bda6f8d40df289"} Mar 12 08:24:24 crc kubenswrapper[4809]: I0312 08:24:24.115306 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" event={"ID":"5e637b87-4eab-400a-b98b-08f2da100650","Type":"ContainerStarted","Data":"c158d3a4b588485c82571fb9e425555f1c4b311a22ab3d24659965a5cab8aa44"} Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.127835 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4056654a-5349-4947-9a20-99626cb45c87" path="/var/lib/kubelet/pods/4056654a-5349-4947-9a20-99626cb45c87/volumes" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.129146 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1edce5-95e6-4080-935e-07399aaa4f89" path="/var/lib/kubelet/pods/7a1edce5-95e6-4080-935e-07399aaa4f89/volumes" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.150047 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"6f39b431-0c84-4f84-b887-d5f74af3d573","Type":"ContainerStarted","Data":"aa3596dde61fc1c9640740ad869c09e84f4c5634bf2ca4f179b3b71e5e883db4"} Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.152530 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65959dcdbb-g8h24" event={"ID":"970fd7c0-4095-4aa7-8e61-f300972a7124","Type":"ContainerStarted","Data":"eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e"} Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.154484 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-65959dcdbb-g8h24" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.159556 4809 generic.go:334] "Generic (PLEG): container finished" podID="43a2fa45-78b7-4556-802d-ec28c44a4f12" containerID="e6f38ddc4df56ab60f5bdfe869a9252bd1f1232314ddc19aa459413915daf29e" exitCode=0 Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.159613 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" event={"ID":"43a2fa45-78b7-4556-802d-ec28c44a4f12","Type":"ContainerDied","Data":"e6f38ddc4df56ab60f5bdfe869a9252bd1f1232314ddc19aa459413915daf29e"} Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.212332 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-65959dcdbb-g8h24" podStartSLOduration=4.212311144 podStartE2EDuration="4.212311144s" podCreationTimestamp="2026-03-12 08:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:25.195750656 +0000 UTC m=+1538.777786389" watchObservedRunningTime="2026-03-12 08:24:25.212311144 +0000 UTC m=+1538.794346877" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.801137 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-677f6cdf55-cmx2q"] Mar 12 08:24:25 crc 
kubenswrapper[4809]: I0312 08:24:25.805189 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.816167 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.816261 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.825177 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.831407 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-677f6cdf55-cmx2q"] Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.976396 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-config-data\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.976549 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/409cda21-626c-4670-9cf9-06900631ddd5-run-httpd\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.976586 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-internal-tls-certs\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " 
pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.976619 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqhfp\" (UniqueName: \"kubernetes.io/projected/409cda21-626c-4670-9cf9-06900631ddd5-kube-api-access-bqhfp\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.976663 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-combined-ca-bundle\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.976859 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/409cda21-626c-4670-9cf9-06900631ddd5-etc-swift\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.976920 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/409cda21-626c-4670-9cf9-06900631ddd5-log-httpd\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:25 crc kubenswrapper[4809]: I0312 08:24:25.977061 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-public-tls-certs\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: 
\"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.080171 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/409cda21-626c-4670-9cf9-06900631ddd5-log-httpd\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.081416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-public-tls-certs\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.081470 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-config-data\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.082271 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/409cda21-626c-4670-9cf9-06900631ddd5-run-httpd\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.082308 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-internal-tls-certs\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " 
pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.082332 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqhfp\" (UniqueName: \"kubernetes.io/projected/409cda21-626c-4670-9cf9-06900631ddd5-kube-api-access-bqhfp\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.082370 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-combined-ca-bundle\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.082497 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/409cda21-626c-4670-9cf9-06900631ddd5-etc-swift\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.082822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/409cda21-626c-4670-9cf9-06900631ddd5-run-httpd\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.081339 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/409cda21-626c-4670-9cf9-06900631ddd5-log-httpd\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc 
kubenswrapper[4809]: I0312 08:24:26.088291 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-public-tls-certs\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.089264 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/409cda21-626c-4670-9cf9-06900631ddd5-etc-swift\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.094897 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-internal-tls-certs\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.095070 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-combined-ca-bundle\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.096931 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409cda21-626c-4670-9cf9-06900631ddd5-config-data\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.105014 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bqhfp\" (UniqueName: \"kubernetes.io/projected/409cda21-626c-4670-9cf9-06900631ddd5-kube-api-access-bqhfp\") pod \"swift-proxy-677f6cdf55-cmx2q\" (UID: \"409cda21-626c-4670-9cf9-06900631ddd5\") " pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.149857 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.178397 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f39b431-0c84-4f84-b887-d5f74af3d573","Type":"ContainerStarted","Data":"e385256959cd28774fee9a870ce5aa13c4a88fe4001e42f00a7915dcc36f3a4d"} Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.192612 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" event={"ID":"43a2fa45-78b7-4556-802d-ec28c44a4f12","Type":"ContainerStarted","Data":"228d81e87f41e7ab431405d079127d1d4ef3ecb92b4b42b615a7e777aa12533e"} Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.193087 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.221944 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" podStartSLOduration=5.221919462 podStartE2EDuration="5.221919462s" podCreationTimestamp="2026-03-12 08:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:26.211343745 +0000 UTC m=+1539.793379478" watchObservedRunningTime="2026-03-12 08:24:26.221919462 +0000 UTC m=+1539.803955195" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.303591 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/neutron-677488855f-bz28w" Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.392534 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7877ddd69d-mkc5h"] Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.392852 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7877ddd69d-mkc5h" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerName="neutron-api" containerID="cri-o://3eecb5ef1d8b229c29f7ec07f94dec773a8f4e06959ae3d2077c7143eb61d022" gracePeriod=30 Mar 12 08:24:26 crc kubenswrapper[4809]: I0312 08:24:26.393513 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7877ddd69d-mkc5h" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerName="neutron-httpd" containerID="cri-o://fe20fa75b8ebe3a499eff0c4a37968a4213fb9728f5ae5283afc6a8e419a7d05" gracePeriod=30 Mar 12 08:24:27 crc kubenswrapper[4809]: I0312 08:24:27.256756 4809 generic.go:334] "Generic (PLEG): container finished" podID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerID="fe20fa75b8ebe3a499eff0c4a37968a4213fb9728f5ae5283afc6a8e419a7d05" exitCode=0 Mar 12 08:24:27 crc kubenswrapper[4809]: I0312 08:24:27.259181 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877ddd69d-mkc5h" event={"ID":"160fb026-73c9-4fbd-8602-aa6de6bc9417","Type":"ContainerDied","Data":"fe20fa75b8ebe3a499eff0c4a37968a4213fb9728f5ae5283afc6a8e419a7d05"} Mar 12 08:24:28 crc kubenswrapper[4809]: I0312 08:24:28.789331 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-677f6cdf55-cmx2q"] Mar 12 08:24:28 crc kubenswrapper[4809]: W0312 08:24:28.798913 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod409cda21_626c_4670_9cf9_06900631ddd5.slice/crio-c8b4efb9a41958e1dccf8f541691c331fd4763ccfd88e33452f781aaa32fe481 WatchSource:0}: Error finding container 
c8b4efb9a41958e1dccf8f541691c331fd4763ccfd88e33452f781aaa32fe481: Status 404 returned error can't find the container with id c8b4efb9a41958e1dccf8f541691c331fd4763ccfd88e33452f781aaa32fe481 Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.367982 4809 generic.go:334] "Generic (PLEG): container finished" podID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerID="3eecb5ef1d8b229c29f7ec07f94dec773a8f4e06959ae3d2077c7143eb61d022" exitCode=0 Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.368502 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877ddd69d-mkc5h" event={"ID":"160fb026-73c9-4fbd-8602-aa6de6bc9417","Type":"ContainerDied","Data":"3eecb5ef1d8b229c29f7ec07f94dec773a8f4e06959ae3d2077c7143eb61d022"} Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.382787 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" event={"ID":"5e637b87-4eab-400a-b98b-08f2da100650","Type":"ContainerStarted","Data":"7cb2db8a0543905d4c83711a3c028acb096f2a452bbce32fac6798c8cd437d95"} Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.384283 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.409328 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-677f6cdf55-cmx2q" event={"ID":"409cda21-626c-4670-9cf9-06900631ddd5","Type":"ContainerStarted","Data":"90e58ead81b0375c78fc79799951be19de769fcd69a284cddeb809369db0ffd9"} Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.409382 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-677f6cdf55-cmx2q" event={"ID":"409cda21-626c-4670-9cf9-06900631ddd5","Type":"ContainerStarted","Data":"c8b4efb9a41958e1dccf8f541691c331fd4763ccfd88e33452f781aaa32fe481"} Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.412324 4809 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" podStartSLOduration=4.399353564 podStartE2EDuration="8.412300024s" podCreationTimestamp="2026-03-12 08:24:21 +0000 UTC" firstStartedPulling="2026-03-12 08:24:23.785199375 +0000 UTC m=+1537.367235118" lastFinishedPulling="2026-03-12 08:24:27.798145845 +0000 UTC m=+1541.380181578" observedRunningTime="2026-03-12 08:24:29.404292847 +0000 UTC m=+1542.986328580" watchObservedRunningTime="2026-03-12 08:24:29.412300024 +0000 UTC m=+1542.994335747" Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.435733 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f39b431-0c84-4f84-b887-d5f74af3d573","Type":"ContainerStarted","Data":"60d53b941914d36cb3a7d2a04bf76332e393adba6e142b0d40f17e0f405f7cab"} Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.510507 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7fb898569b-fzjx6" event={"ID":"3c291d73-cd7f-494e-8684-e6bda1c78259","Type":"ContainerStarted","Data":"59b06b4b056813793a9b771b9bc23f38a65b7b4e156d88ed76e70b6acd0398e5"} Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.513032 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.521615 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.52158957 podStartE2EDuration="7.52158957s" podCreationTimestamp="2026-03-12 08:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:29.477575409 +0000 UTC m=+1543.059611152" watchObservedRunningTime="2026-03-12 08:24:29.52158957 +0000 UTC m=+1543.103625303" Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.567864 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-api-7fb898569b-fzjx6" podStartSLOduration=4.5804921929999995 podStartE2EDuration="8.567837931s" podCreationTimestamp="2026-03-12 08:24:21 +0000 UTC" firstStartedPulling="2026-03-12 08:24:23.792939374 +0000 UTC m=+1537.374975107" lastFinishedPulling="2026-03-12 08:24:27.780285122 +0000 UTC m=+1541.362320845" observedRunningTime="2026-03-12 08:24:29.543477821 +0000 UTC m=+1543.125513554" watchObservedRunningTime="2026-03-12 08:24:29.567837931 +0000 UTC m=+1543.149873664" Mar 12 08:24:29 crc kubenswrapper[4809]: I0312 08:24:29.974552 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.075096 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-combined-ca-bundle\") pod \"160fb026-73c9-4fbd-8602-aa6de6bc9417\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.075200 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-config\") pod \"160fb026-73c9-4fbd-8602-aa6de6bc9417\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.075249 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-ovndb-tls-certs\") pod \"160fb026-73c9-4fbd-8602-aa6de6bc9417\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.075403 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svvbt\" (UniqueName: \"kubernetes.io/projected/160fb026-73c9-4fbd-8602-aa6de6bc9417-kube-api-access-svvbt\") 
pod \"160fb026-73c9-4fbd-8602-aa6de6bc9417\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.075502 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-httpd-config\") pod \"160fb026-73c9-4fbd-8602-aa6de6bc9417\" (UID: \"160fb026-73c9-4fbd-8602-aa6de6bc9417\") " Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.127411 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160fb026-73c9-4fbd-8602-aa6de6bc9417-kube-api-access-svvbt" (OuterVolumeSpecName: "kube-api-access-svvbt") pod "160fb026-73c9-4fbd-8602-aa6de6bc9417" (UID: "160fb026-73c9-4fbd-8602-aa6de6bc9417"). InnerVolumeSpecName "kube-api-access-svvbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.155467 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "160fb026-73c9-4fbd-8602-aa6de6bc9417" (UID: "160fb026-73c9-4fbd-8602-aa6de6bc9417"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.187223 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svvbt\" (UniqueName: \"kubernetes.io/projected/160fb026-73c9-4fbd-8602-aa6de6bc9417-kube-api-access-svvbt\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.187361 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.224261 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "160fb026-73c9-4fbd-8602-aa6de6bc9417" (UID: "160fb026-73c9-4fbd-8602-aa6de6bc9417"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.263062 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "160fb026-73c9-4fbd-8602-aa6de6bc9417" (UID: "160fb026-73c9-4fbd-8602-aa6de6bc9417"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.281236 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-config" (OuterVolumeSpecName: "config") pod "160fb026-73c9-4fbd-8602-aa6de6bc9417" (UID: "160fb026-73c9-4fbd-8602-aa6de6bc9417"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.290151 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.290183 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.290196 4809 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/160fb026-73c9-4fbd-8602-aa6de6bc9417-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.533984 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-677f6cdf55-cmx2q" event={"ID":"409cda21-626c-4670-9cf9-06900631ddd5","Type":"ContainerStarted","Data":"71bac8dde17e37affdec3ca61e0da316d42795740d55d93beb89b34a4f2c1c1b"} Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.534127 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.534172 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.542668 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877ddd69d-mkc5h" event={"ID":"160fb026-73c9-4fbd-8602-aa6de6bc9417","Type":"ContainerDied","Data":"95967317ce2ce6ace655df6f194afdab6cb92db9202c013ec8808385ee4def76"} Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.542767 4809 scope.go:117] "RemoveContainer" containerID="fe20fa75b8ebe3a499eff0c4a37968a4213fb9728f5ae5283afc6a8e419a7d05" 
Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.543135 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7877ddd69d-mkc5h" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.579734 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-776db87b84-bnbp7"] Mar 12 08:24:30 crc kubenswrapper[4809]: E0312 08:24:30.580437 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerName="neutron-httpd" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.580457 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerName="neutron-httpd" Mar 12 08:24:30 crc kubenswrapper[4809]: E0312 08:24:30.580500 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerName="neutron-api" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.580507 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerName="neutron-api" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.580749 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerName="neutron-httpd" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.580803 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" containerName="neutron-api" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.581831 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.590230 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-677f6cdf55-cmx2q" podStartSLOduration=5.590214993 podStartE2EDuration="5.590214993s" podCreationTimestamp="2026-03-12 08:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:30.570353486 +0000 UTC m=+1544.152389219" watchObservedRunningTime="2026-03-12 08:24:30.590214993 +0000 UTC m=+1544.172250726" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.602057 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-combined-ca-bundle\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.602241 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data-custom\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.602275 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.602300 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6smr\" (UniqueName: \"kubernetes.io/projected/2c1e547b-63a1-4600-83d6-7efc902df373-kube-api-access-p6smr\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.632285 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-776db87b84-bnbp7"] Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.657841 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-bcd664c56-8zpnk"] Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.659869 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.662760 4809 scope.go:117] "RemoveContainer" containerID="3eecb5ef1d8b229c29f7ec07f94dec773a8f4e06959ae3d2077c7143eb61d022" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.677383 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7877ddd69d-mkc5h"] Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.698570 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7877ddd69d-mkc5h"] Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.717616 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-combined-ca-bundle\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.717720 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-combined-ca-bundle\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.717745 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data-custom\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.717770 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.717790 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6smr\" (UniqueName: \"kubernetes.io/projected/2c1e547b-63a1-4600-83d6-7efc902df373-kube-api-access-p6smr\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.717892 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data-custom\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.717927 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5cd\" (UniqueName: 
\"kubernetes.io/projected/efdd8218-be37-4288-a09b-4b4119b5ca39-kube-api-access-bd5cd\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.717975 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.725926 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data-custom\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.730890 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-bcd664c56-8zpnk"] Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.745299 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f489d8994-s29s4"] Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.747090 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-combined-ca-bundle\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.749726 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.751222 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.751586 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6smr\" (UniqueName: \"kubernetes.io/projected/2c1e547b-63a1-4600-83d6-7efc902df373-kube-api-access-p6smr\") pod \"heat-engine-776db87b84-bnbp7\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.783502 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f489d8994-s29s4"] Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.824163 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.824355 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data-custom\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.824474 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5cd\" (UniqueName: 
\"kubernetes.io/projected/efdd8218-be37-4288-a09b-4b4119b5ca39-kube-api-access-bd5cd\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.824541 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data-custom\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.824603 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.824862 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-combined-ca-bundle\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.824908 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-combined-ca-bundle\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.825696 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgd5v\" (UniqueName: 
\"kubernetes.io/projected/5e4b8a60-4319-4d91-a9e5-897d76e61f96-kube-api-access-tgd5v\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.855908 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data-custom\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.857399 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-combined-ca-bundle\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.862986 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.867934 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5cd\" (UniqueName: \"kubernetes.io/projected/efdd8218-be37-4288-a09b-4b4119b5ca39-kube-api-access-bd5cd\") pod \"heat-api-bcd664c56-8zpnk\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.911555 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.927950 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-combined-ca-bundle\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.928050 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgd5v\" (UniqueName: \"kubernetes.io/projected/5e4b8a60-4319-4d91-a9e5-897d76e61f96-kube-api-access-tgd5v\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.928081 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.928166 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data-custom\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.934104 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data-custom\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " 
pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.936950 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.944954 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-combined-ca-bundle\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:30 crc kubenswrapper[4809]: I0312 08:24:30.954735 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgd5v\" (UniqueName: \"kubernetes.io/projected/5e4b8a60-4319-4d91-a9e5-897d76e61f96-kube-api-access-tgd5v\") pod \"heat-cfnapi-6f489d8994-s29s4\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.064481 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.082814 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.162320 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160fb026-73c9-4fbd-8602-aa6de6bc9417" path="/var/lib/kubelet/pods/160fb026-73c9-4fbd-8602-aa6de6bc9417/volumes" Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.178968 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.179406 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="ceilometer-central-agent" containerID="cri-o://6392b8a3e03c4f8ab556c0f28a3f56a4799a23a97fdd2e354ca291a1255faf1a" gracePeriod=30 Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.182851 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="proxy-httpd" containerID="cri-o://646f034d2fe028a00e20db2f8597d5d4743f531a101cab39d592f1ca8bcedc67" gracePeriod=30 Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.183140 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="sg-core" containerID="cri-o://7a1781ec7a3be4ed5094c635f78927db827be1303297f2af393a13c8b83b998a" gracePeriod=30 Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.183200 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="ceilometer-notification-agent" containerID="cri-o://f7642ca7d724aec9fe410366f0506fcd7d36c927caadb24f7acd63ff499120fb" gracePeriod=30 Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.206522 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.606939 4809 generic.go:334] "Generic (PLEG): container finished" podID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerID="7a1781ec7a3be4ed5094c635f78927db827be1303297f2af393a13c8b83b998a" exitCode=2 Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.607037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerDied","Data":"7a1781ec7a3be4ed5094c635f78927db827be1303297f2af393a13c8b83b998a"} Mar 12 08:24:31 crc kubenswrapper[4809]: I0312 08:24:31.665040 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-776db87b84-bnbp7"] Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:31.925589 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:31.936801 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-bcd664c56-8zpnk"] Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.065723 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4qr2q"] Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.066230 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" podUID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" containerName="dnsmasq-dns" containerID="cri-o://4bd3a15a038aa7494f171ea96ee0e3a348f6f544b0608b49f75ff46c4f12353c" gracePeriod=10 Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.095784 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f489d8994-s29s4"] Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.207701 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" podUID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.218:5353: connect: connection refused" Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.784379 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd664c56-8zpnk" event={"ID":"efdd8218-be37-4288-a09b-4b4119b5ca39","Type":"ContainerStarted","Data":"8261c475e03d2d9e2d235c63fb28aac820fd5993d8722c085a75fb130246bf94"} Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.795094 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-776db87b84-bnbp7" event={"ID":"2c1e547b-63a1-4600-83d6-7efc902df373","Type":"ContainerStarted","Data":"c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c"} Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.795283 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-776db87b84-bnbp7" event={"ID":"2c1e547b-63a1-4600-83d6-7efc902df373","Type":"ContainerStarted","Data":"2f50bff09564f728d3e2d5cff3d830dda02a9ba1bef0000f647b62e3b7cfaeac"} Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.797289 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.845441 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-776db87b84-bnbp7" podStartSLOduration=2.845409191 podStartE2EDuration="2.845409191s" podCreationTimestamp="2026-03-12 08:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:32.81765062 +0000 UTC m=+1546.399686353" watchObservedRunningTime="2026-03-12 08:24:32.845409191 +0000 UTC m=+1546.427444914" Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.857327 4809 generic.go:334] "Generic (PLEG): container finished" podID="cd8e10ab-6dee-431d-8f02-af12acc9e823" 
containerID="646f034d2fe028a00e20db2f8597d5d4743f531a101cab39d592f1ca8bcedc67" exitCode=0 Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.857382 4809 generic.go:334] "Generic (PLEG): container finished" podID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerID="f7642ca7d724aec9fe410366f0506fcd7d36c927caadb24f7acd63ff499120fb" exitCode=0 Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.857395 4809 generic.go:334] "Generic (PLEG): container finished" podID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerID="6392b8a3e03c4f8ab556c0f28a3f56a4799a23a97fdd2e354ca291a1255faf1a" exitCode=0 Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.857496 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerDied","Data":"646f034d2fe028a00e20db2f8597d5d4743f531a101cab39d592f1ca8bcedc67"} Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.857535 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerDied","Data":"f7642ca7d724aec9fe410366f0506fcd7d36c927caadb24f7acd63ff499120fb"} Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.857587 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerDied","Data":"6392b8a3e03c4f8ab556c0f28a3f56a4799a23a97fdd2e354ca291a1255faf1a"} Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.869340 4809 generic.go:334] "Generic (PLEG): container finished" podID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" containerID="4bd3a15a038aa7494f171ea96ee0e3a348f6f544b0608b49f75ff46c4f12353c" exitCode=0 Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.869428 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" 
event={"ID":"c0262dd0-da72-4f2a-a2e9-bbb12081451d","Type":"ContainerDied","Data":"4bd3a15a038aa7494f171ea96ee0e3a348f6f544b0608b49f75ff46c4f12353c"} Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.875821 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f489d8994-s29s4" event={"ID":"5e4b8a60-4319-4d91-a9e5-897d76e61f96","Type":"ContainerStarted","Data":"f877cdd60e648e11b9664dc7142d4e7a8c5bc18c4cf0f7853f2537be669db5ad"} Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.876311 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:32 crc kubenswrapper[4809]: I0312 08:24:32.908645 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f489d8994-s29s4" podStartSLOduration=2.90860815 podStartE2EDuration="2.90860815s" podCreationTimestamp="2026-03-12 08:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:32.896728598 +0000 UTC m=+1546.478764321" watchObservedRunningTime="2026-03-12 08:24:32.90860815 +0000 UTC m=+1546.490643883" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.324146 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bcp4" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" probeResult="failure" output=< Mar 12 08:24:33 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:24:33 crc kubenswrapper[4809]: > Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.374258 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.427651 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.447082 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.474730 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-run-httpd\") pod \"cd8e10ab-6dee-431d-8f02-af12acc9e823\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.474889 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-log-httpd\") pod \"cd8e10ab-6dee-431d-8f02-af12acc9e823\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.474916 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-sg-core-conf-yaml\") pod \"cd8e10ab-6dee-431d-8f02-af12acc9e823\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.474976 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-combined-ca-bundle\") pod \"cd8e10ab-6dee-431d-8f02-af12acc9e823\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.475062 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-config-data\") pod \"cd8e10ab-6dee-431d-8f02-af12acc9e823\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " Mar 12 
08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.475159 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxcch\" (UniqueName: \"kubernetes.io/projected/cd8e10ab-6dee-431d-8f02-af12acc9e823-kube-api-access-nxcch\") pod \"cd8e10ab-6dee-431d-8f02-af12acc9e823\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.475267 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-scripts\") pod \"cd8e10ab-6dee-431d-8f02-af12acc9e823\" (UID: \"cd8e10ab-6dee-431d-8f02-af12acc9e823\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.475331 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cd8e10ab-6dee-431d-8f02-af12acc9e823" (UID: "cd8e10ab-6dee-431d-8f02-af12acc9e823"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.476737 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cd8e10ab-6dee-431d-8f02-af12acc9e823" (UID: "cd8e10ab-6dee-431d-8f02-af12acc9e823"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.480789 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.480835 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd8e10ab-6dee-431d-8f02-af12acc9e823-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.539081 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-scripts" (OuterVolumeSpecName: "scripts") pod "cd8e10ab-6dee-431d-8f02-af12acc9e823" (UID: "cd8e10ab-6dee-431d-8f02-af12acc9e823"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.539197 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8e10ab-6dee-431d-8f02-af12acc9e823-kube-api-access-nxcch" (OuterVolumeSpecName: "kube-api-access-nxcch") pod "cd8e10ab-6dee-431d-8f02-af12acc9e823" (UID: "cd8e10ab-6dee-431d-8f02-af12acc9e823"). InnerVolumeSpecName "kube-api-access-nxcch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.568255 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cd8e10ab-6dee-431d-8f02-af12acc9e823" (UID: "cd8e10ab-6dee-431d-8f02-af12acc9e823"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.581889 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-svc\") pod \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.581960 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6spcp\" (UniqueName: \"kubernetes.io/projected/c0262dd0-da72-4f2a-a2e9-bbb12081451d-kube-api-access-6spcp\") pod \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.581984 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-sb\") pod \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.582026 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-swift-storage-0\") pod \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.582185 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-nb\") pod \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.582200 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-config\") pod \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\" (UID: \"c0262dd0-da72-4f2a-a2e9-bbb12081451d\") " Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.582810 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.582822 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxcch\" (UniqueName: \"kubernetes.io/projected/cd8e10ab-6dee-431d-8f02-af12acc9e823-kube-api-access-nxcch\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.582832 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.603754 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0262dd0-da72-4f2a-a2e9-bbb12081451d-kube-api-access-6spcp" (OuterVolumeSpecName: "kube-api-access-6spcp") pod "c0262dd0-da72-4f2a-a2e9-bbb12081451d" (UID: "c0262dd0-da72-4f2a-a2e9-bbb12081451d"). InnerVolumeSpecName "kube-api-access-6spcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.691627 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6spcp\" (UniqueName: \"kubernetes.io/projected/c0262dd0-da72-4f2a-a2e9-bbb12081451d-kube-api-access-6spcp\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.718029 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.746678 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd8e10ab-6dee-431d-8f02-af12acc9e823" (UID: "cd8e10ab-6dee-431d-8f02-af12acc9e823"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.766062 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0262dd0-da72-4f2a-a2e9-bbb12081451d" (UID: "c0262dd0-da72-4f2a-a2e9-bbb12081451d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.769506 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0262dd0-da72-4f2a-a2e9-bbb12081451d" (UID: "c0262dd0-da72-4f2a-a2e9-bbb12081451d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.793936 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.793971 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.793983 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.804618 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-config" (OuterVolumeSpecName: "config") pod "c0262dd0-da72-4f2a-a2e9-bbb12081451d" (UID: "c0262dd0-da72-4f2a-a2e9-bbb12081451d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.826708 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0262dd0-da72-4f2a-a2e9-bbb12081451d" (UID: "c0262dd0-da72-4f2a-a2e9-bbb12081451d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.832383 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0262dd0-da72-4f2a-a2e9-bbb12081451d" (UID: "c0262dd0-da72-4f2a-a2e9-bbb12081451d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.876710 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-config-data" (OuterVolumeSpecName: "config-data") pod "cd8e10ab-6dee-431d-8f02-af12acc9e823" (UID: "cd8e10ab-6dee-431d-8f02-af12acc9e823"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.892508 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd8e10ab-6dee-431d-8f02-af12acc9e823","Type":"ContainerDied","Data":"97a5c0ea02a25363ec34b664c63a82842058dbc2039d427e558edf8a48938455"} Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.892566 4809 scope.go:117] "RemoveContainer" containerID="646f034d2fe028a00e20db2f8597d5d4743f531a101cab39d592f1ca8bcedc67" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.892743 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.898386 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.898425 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.898440 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0262dd0-da72-4f2a-a2e9-bbb12081451d-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.898449 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd8e10ab-6dee-431d-8f02-af12acc9e823-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.904757 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" event={"ID":"c0262dd0-da72-4f2a-a2e9-bbb12081451d","Type":"ContainerDied","Data":"aa872aca52897e149d8de4a40110b18e85410b612a958efd326f4409f1bfdcd9"} Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.904876 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-4qr2q" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.913576 4809 generic.go:334] "Generic (PLEG): container finished" podID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerID="55f099fb1918256dd497b2cccd57d97bfd017c32bb7c7573eb0e7e2e0e7c40f8" exitCode=1 Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.913694 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f489d8994-s29s4" event={"ID":"5e4b8a60-4319-4d91-a9e5-897d76e61f96","Type":"ContainerDied","Data":"55f099fb1918256dd497b2cccd57d97bfd017c32bb7c7573eb0e7e2e0e7c40f8"} Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.915127 4809 scope.go:117] "RemoveContainer" containerID="55f099fb1918256dd497b2cccd57d97bfd017c32bb7c7573eb0e7e2e0e7c40f8" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.949190 4809 scope.go:117] "RemoveContainer" containerID="f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2" Mar 12 08:24:33 crc kubenswrapper[4809]: I0312 08:24:33.949877 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd664c56-8zpnk" event={"ID":"efdd8218-be37-4288-a09b-4b4119b5ca39","Type":"ContainerStarted","Data":"f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2"} Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.162477 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.166721 4809 scope.go:117] "RemoveContainer" containerID="7a1781ec7a3be4ed5094c635f78927db827be1303297f2af393a13c8b83b998a" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.192205 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.202590 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:34 crc kubenswrapper[4809]: E0312 08:24:34.203452 4809 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="ceilometer-central-agent" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203470 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="ceilometer-central-agent" Mar 12 08:24:34 crc kubenswrapper[4809]: E0312 08:24:34.203492 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="ceilometer-notification-agent" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203500 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="ceilometer-notification-agent" Mar 12 08:24:34 crc kubenswrapper[4809]: E0312 08:24:34.203509 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="proxy-httpd" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203518 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="proxy-httpd" Mar 12 08:24:34 crc kubenswrapper[4809]: E0312 08:24:34.203550 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" containerName="dnsmasq-dns" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203556 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" containerName="dnsmasq-dns" Mar 12 08:24:34 crc kubenswrapper[4809]: E0312 08:24:34.203574 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" containerName="init" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203580 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" containerName="init" Mar 12 08:24:34 crc kubenswrapper[4809]: E0312 08:24:34.203595 4809 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="sg-core" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203601 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="sg-core" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203841 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="ceilometer-notification-agent" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203855 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="ceilometer-central-agent" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203864 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="sg-core" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203874 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" containerName="proxy-httpd" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.203891 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" containerName="dnsmasq-dns" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.210250 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.215151 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.215462 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.219152 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4qr2q"] Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.241083 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-4qr2q"] Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.277475 4809 scope.go:117] "RemoveContainer" containerID="f7642ca7d724aec9fe410366f0506fcd7d36c927caadb24f7acd63ff499120fb" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.327540 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.341728 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.343733 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-scripts\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.345473 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-run-httpd\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.345721 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-log-httpd\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.345769 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.346000 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmgtm\" (UniqueName: \"kubernetes.io/projected/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-kube-api-access-vmgtm\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.346034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-config-data\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.404364 4809 scope.go:117] "RemoveContainer" containerID="6392b8a3e03c4f8ab556c0f28a3f56a4799a23a97fdd2e354ca291a1255faf1a" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.441357 4809 scope.go:117] "RemoveContainer" 
containerID="4bd3a15a038aa7494f171ea96ee0e3a348f6f544b0608b49f75ff46c4f12353c" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.448191 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-run-httpd\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.448294 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-log-httpd\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.448335 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.448420 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmgtm\" (UniqueName: \"kubernetes.io/projected/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-kube-api-access-vmgtm\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.448444 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-config-data\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.448485 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.448505 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-scripts\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.449224 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-run-httpd\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.449532 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-log-httpd\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.464453 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-config-data\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.465019 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.466167 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-scripts\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.466338 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.468105 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmgtm\" (UniqueName: \"kubernetes.io/projected/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-kube-api-access-vmgtm\") pod \"ceilometer-0\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " pod="openstack/ceilometer-0" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.479275 4809 scope.go:117] "RemoveContainer" containerID="dc909eb537704e5cd8027931364ad77526a7fc046411c0cb47774d61a7694596" Mar 12 08:24:34 crc kubenswrapper[4809]: I0312 08:24:34.585908 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.046425 4809 generic.go:334] "Generic (PLEG): container finished" podID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerID="bbc0864639506c8c2decddea0021d65f355e5aca267a33817f42430f82d211ce" exitCode=1 Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.048536 4809 scope.go:117] "RemoveContainer" containerID="bbc0864639506c8c2decddea0021d65f355e5aca267a33817f42430f82d211ce" Mar 12 08:24:35 crc kubenswrapper[4809]: E0312 08:24:35.048925 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f489d8994-s29s4_openstack(5e4b8a60-4319-4d91-a9e5-897d76e61f96)\"" pod="openstack/heat-cfnapi-6f489d8994-s29s4" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.049156 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f489d8994-s29s4" event={"ID":"5e4b8a60-4319-4d91-a9e5-897d76e61f96","Type":"ContainerDied","Data":"bbc0864639506c8c2decddea0021d65f355e5aca267a33817f42430f82d211ce"} Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.049197 4809 scope.go:117] "RemoveContainer" containerID="55f099fb1918256dd497b2cccd57d97bfd017c32bb7c7573eb0e7e2e0e7c40f8" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.119542 4809 generic.go:334] "Generic (PLEG): container finished" podID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerID="f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2" exitCode=1 Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.119584 4809 generic.go:334] "Generic (PLEG): container finished" podID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerID="f5b4b8d0bfc4bb029e2ed3cd3a5f85628813c70631ec341d6feb9709f5fcabad" exitCode=1 Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.120706 4809 scope.go:117] 
"RemoveContainer" containerID="f5b4b8d0bfc4bb029e2ed3cd3a5f85628813c70631ec341d6feb9709f5fcabad" Mar 12 08:24:35 crc kubenswrapper[4809]: E0312 08:24:35.121033 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-bcd664c56-8zpnk_openstack(efdd8218-be37-4288-a09b-4b4119b5ca39)\"" pod="openstack/heat-api-bcd664c56-8zpnk" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.152755 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0262dd0-da72-4f2a-a2e9-bbb12081451d" path="/var/lib/kubelet/pods/c0262dd0-da72-4f2a-a2e9-bbb12081451d/volumes" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.153535 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8e10ab-6dee-431d-8f02-af12acc9e823" path="/var/lib/kubelet/pods/cd8e10ab-6dee-431d-8f02-af12acc9e823/volumes" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.155510 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd664c56-8zpnk" event={"ID":"efdd8218-be37-4288-a09b-4b4119b5ca39","Type":"ContainerDied","Data":"f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2"} Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.155551 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd664c56-8zpnk" event={"ID":"efdd8218-be37-4288-a09b-4b4119b5ca39","Type":"ContainerDied","Data":"f5b4b8d0bfc4bb029e2ed3cd3a5f85628813c70631ec341d6feb9709f5fcabad"} Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.232288 4809 scope.go:117] "RemoveContainer" containerID="f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.267952 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 
08:24:35.406535 4809 scope.go:117] "RemoveContainer" containerID="f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2" Mar 12 08:24:35 crc kubenswrapper[4809]: E0312 08:24:35.412859 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2\": container with ID starting with f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2 not found: ID does not exist" containerID="f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.412918 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2"} err="failed to get container status \"f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2\": rpc error: code = NotFound desc = could not find container \"f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2\": container with ID starting with f95fbbe6a327c41dee3bf97aef30a58295dcaccdf6f183f722dbd01cafa1e9e2 not found: ID does not exist" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.818196 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7fb898569b-fzjx6"] Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.818566 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7fb898569b-fzjx6" podUID="3c291d73-cd7f-494e-8684-e6bda1c78259" containerName="heat-api" containerID="cri-o://59b06b4b056813793a9b771b9bc23f38a65b7b4e156d88ed76e70b6acd0398e5" gracePeriod=60 Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.837255 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5f7c584ffb-4dlpd"] Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.837613 4809 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" podUID="5e637b87-4eab-400a-b98b-08f2da100650" containerName="heat-cfnapi" containerID="cri-o://7cb2db8a0543905d4c83711a3c028acb096f2a452bbce32fac6798c8cd437d95" gracePeriod=60 Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.887753 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-84d75fd6d6-jt5pm"] Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.889668 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.894465 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.901054 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-765ccf488b-zgbk5"] Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.902708 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.922094 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" podUID="5e637b87-4eab-400a-b98b-08f2da100650" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.225:8000/healthcheck\": EOF" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.922691 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.923007 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.923478 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.930567 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-84d75fd6d6-jt5pm"] Mar 12 08:24:35 crc kubenswrapper[4809]: I0312 08:24:35.954089 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-765ccf488b-zgbk5"] Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.047716 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-internal-tls-certs\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.047783 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-public-tls-certs\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") 
" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.047834 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data-custom\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.047855 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-combined-ca-bundle\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.047875 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-combined-ca-bundle\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.047933 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9kx\" (UniqueName: \"kubernetes.io/projected/157d4d8b-15cb-413b-b689-209cdf45f1b7-kube-api-access-tk9kx\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.047970 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data\") pod \"heat-api-84d75fd6d6-jt5pm\" 
(UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.048024 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-internal-tls-certs\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.048066 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-public-tls-certs\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.048097 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data-custom\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.048157 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.048178 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlw5\" (UniqueName: 
\"kubernetes.io/projected/be0d6711-7a21-40cf-ba47-eff3c52046e7-kube-api-access-ghlw5\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.066886 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.067179 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.083788 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.083851 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.157336 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerStarted","Data":"a554ae445332c23b64205fac8f1e7d67ca6fba2f16f8bd751719dff0511ef86c"} Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.157773 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.160507 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-public-tls-certs\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.160619 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data-custom\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.160715 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.160742 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghlw5\" (UniqueName: \"kubernetes.io/projected/be0d6711-7a21-40cf-ba47-eff3c52046e7-kube-api-access-ghlw5\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.161032 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-internal-tls-certs\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.161091 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-public-tls-certs\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.161162 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-combined-ca-bundle\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.161186 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data-custom\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.161213 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-combined-ca-bundle\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.165135 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9kx\" (UniqueName: \"kubernetes.io/projected/157d4d8b-15cb-413b-b689-209cdf45f1b7-kube-api-access-tk9kx\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.165242 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.165410 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-internal-tls-certs\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.171889 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-677f6cdf55-cmx2q" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.176056 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-internal-tls-certs\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.178177 4809 scope.go:117] "RemoveContainer" containerID="f5b4b8d0bfc4bb029e2ed3cd3a5f85628813c70631ec341d6feb9709f5fcabad" Mar 12 08:24:36 crc kubenswrapper[4809]: E0312 08:24:36.180430 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-bcd664c56-8zpnk_openstack(efdd8218-be37-4288-a09b-4b4119b5ca39)\"" pod="openstack/heat-api-bcd664c56-8zpnk" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.181400 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data-custom\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.187237 4809 scope.go:117] "RemoveContainer" containerID="bbc0864639506c8c2decddea0021d65f355e5aca267a33817f42430f82d211ce" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.188572 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-combined-ca-bundle\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: E0312 08:24:36.191429 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f489d8994-s29s4_openstack(5e4b8a60-4319-4d91-a9e5-897d76e61f96)\"" pod="openstack/heat-cfnapi-6f489d8994-s29s4" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.193801 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-internal-tls-certs\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.198291 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-combined-ca-bundle\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.198393 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-public-tls-certs\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.203523 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.228322 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data-custom\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.238051 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9kx\" (UniqueName: \"kubernetes.io/projected/157d4d8b-15cb-413b-b689-209cdf45f1b7-kube-api-access-tk9kx\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.238528 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghlw5\" (UniqueName: \"kubernetes.io/projected/be0d6711-7a21-40cf-ba47-eff3c52046e7-kube-api-access-ghlw5\") pod \"heat-cfnapi-765ccf488b-zgbk5\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.239254 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.245514 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-public-tls-certs\") pod \"heat-api-84d75fd6d6-jt5pm\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.279198 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.464846 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.495910 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f84ff9464-ktgn6" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.541607 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.586642 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6586c99478-72j5b"] Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.586913 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6586c99478-72j5b" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerName="placement-log" containerID="cri-o://c78920513e2ac05da31144b07e115193999eae386099b094fa6da04caa07e3bd" gracePeriod=30 Mar 12 08:24:36 crc kubenswrapper[4809]: I0312 08:24:36.587197 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6586c99478-72j5b" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerName="placement-api" containerID="cri-o://deac96f5d03d62ef8b9106ebf6cbcd9a47096ae5a8380d7f3af6bec1c61cc791" gracePeriod=30 Mar 12 08:24:37 crc kubenswrapper[4809]: I0312 08:24:37.037348 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:24:37 crc kubenswrapper[4809]: 
I0312 08:24:37.211050 4809 generic.go:334] "Generic (PLEG): container finished" podID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerID="c78920513e2ac05da31144b07e115193999eae386099b094fa6da04caa07e3bd" exitCode=143 Mar 12 08:24:37 crc kubenswrapper[4809]: I0312 08:24:37.214256 4809 scope.go:117] "RemoveContainer" containerID="bbc0864639506c8c2decddea0021d65f355e5aca267a33817f42430f82d211ce" Mar 12 08:24:37 crc kubenswrapper[4809]: E0312 08:24:37.214548 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f489d8994-s29s4_openstack(5e4b8a60-4319-4d91-a9e5-897d76e61f96)\"" pod="openstack/heat-cfnapi-6f489d8994-s29s4" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" Mar 12 08:24:37 crc kubenswrapper[4809]: I0312 08:24:37.214973 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6586c99478-72j5b" event={"ID":"0f8141f6-bf63-4133-9451-df9c0dd0c1e7","Type":"ContainerDied","Data":"c78920513e2ac05da31144b07e115193999eae386099b094fa6da04caa07e3bd"} Mar 12 08:24:37 crc kubenswrapper[4809]: I0312 08:24:37.215426 4809 scope.go:117] "RemoveContainer" containerID="f5b4b8d0bfc4bb029e2ed3cd3a5f85628813c70631ec341d6feb9709f5fcabad" Mar 12 08:24:37 crc kubenswrapper[4809]: E0312 08:24:37.215649 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-bcd664c56-8zpnk_openstack(efdd8218-be37-4288-a09b-4b4119b5ca39)\"" pod="openstack/heat-api-bcd664c56-8zpnk" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" Mar 12 08:24:37 crc kubenswrapper[4809]: I0312 08:24:37.790228 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:40 crc kubenswrapper[4809]: I0312 08:24:40.380687 4809 generic.go:334] "Generic 
(PLEG): container finished" podID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerID="deac96f5d03d62ef8b9106ebf6cbcd9a47096ae5a8380d7f3af6bec1c61cc791" exitCode=0 Mar 12 08:24:40 crc kubenswrapper[4809]: I0312 08:24:40.380789 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6586c99478-72j5b" event={"ID":"0f8141f6-bf63-4133-9451-df9c0dd0c1e7","Type":"ContainerDied","Data":"deac96f5d03d62ef8b9106ebf6cbcd9a47096ae5a8380d7f3af6bec1c61cc791"} Mar 12 08:24:41 crc kubenswrapper[4809]: I0312 08:24:41.096576 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:24:41 crc kubenswrapper[4809]: I0312 08:24:41.096661 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:24:41 crc kubenswrapper[4809]: I0312 08:24:41.914221 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7fb898569b-fzjx6" podUID="3c291d73-cd7f-494e-8684-e6bda1c78259" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.226:8004/healthcheck\": read tcp 10.217.0.2:56794->10.217.0.226:8004: read: connection reset by peer" Mar 12 08:24:41 crc kubenswrapper[4809]: I0312 08:24:41.930490 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-65959dcdbb-g8h24" Mar 12 08:24:41 crc kubenswrapper[4809]: I0312 08:24:41.953496 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" podUID="5e637b87-4eab-400a-b98b-08f2da100650" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.225:8000/healthcheck\": read tcp 10.217.0.2:49380->10.217.0.225:8000: read: connection reset by peer" Mar 12 08:24:41 crc kubenswrapper[4809]: I0312 08:24:41.954663 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" podUID="5e637b87-4eab-400a-b98b-08f2da100650" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.225:8000/healthcheck\": dial tcp 10.217.0.225:8000: connect: connection refused" Mar 12 08:24:42 crc kubenswrapper[4809]: I0312 08:24:42.298353 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7fb898569b-fzjx6" podUID="3c291d73-cd7f-494e-8684-e6bda1c78259" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.226:8004/healthcheck\": dial tcp 10.217.0.226:8004: connect: connection refused" Mar 12 08:24:42 crc kubenswrapper[4809]: I0312 08:24:42.415202 4809 generic.go:334] "Generic (PLEG): container finished" podID="5e637b87-4eab-400a-b98b-08f2da100650" containerID="7cb2db8a0543905d4c83711a3c028acb096f2a452bbce32fac6798c8cd437d95" exitCode=0 Mar 12 08:24:42 crc kubenswrapper[4809]: I0312 08:24:42.415286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" event={"ID":"5e637b87-4eab-400a-b98b-08f2da100650","Type":"ContainerDied","Data":"7cb2db8a0543905d4c83711a3c028acb096f2a452bbce32fac6798c8cd437d95"} Mar 12 08:24:42 crc kubenswrapper[4809]: I0312 08:24:42.417539 4809 generic.go:334] "Generic (PLEG): container finished" podID="3c291d73-cd7f-494e-8684-e6bda1c78259" containerID="59b06b4b056813793a9b771b9bc23f38a65b7b4e156d88ed76e70b6acd0398e5" exitCode=0 Mar 12 08:24:42 crc kubenswrapper[4809]: I0312 08:24:42.417641 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7fb898569b-fzjx6" 
event={"ID":"3c291d73-cd7f-494e-8684-e6bda1c78259","Type":"ContainerDied","Data":"59b06b4b056813793a9b771b9bc23f38a65b7b4e156d88ed76e70b6acd0398e5"} Mar 12 08:24:42 crc kubenswrapper[4809]: I0312 08:24:42.424658 4809 generic.go:334] "Generic (PLEG): container finished" podID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerID="435f2cbaa12c9c1d3ad668b2f3265fca77c1ba7eacba43b2082cdd03c435fef4" exitCode=137 Mar 12 08:24:42 crc kubenswrapper[4809]: I0312 08:24:42.424695 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d797bd8-7c65-4856-9b7b-3c207b1a64c4","Type":"ContainerDied","Data":"435f2cbaa12c9c1d3ad668b2f3265fca77c1ba7eacba43b2082cdd03c435fef4"} Mar 12 08:24:42 crc kubenswrapper[4809]: I0312 08:24:42.688160 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.219:8776/healthcheck\": dial tcp 10.217.0.219:8776: connect: connection refused" Mar 12 08:24:43 crc kubenswrapper[4809]: I0312 08:24:43.329169 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8bcp4" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" probeResult="failure" output=< Mar 12 08:24:43 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:24:43 crc kubenswrapper[4809]: > Mar 12 08:24:44 crc kubenswrapper[4809]: I0312 08:24:44.639376 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:24:44 crc kubenswrapper[4809]: I0312 08:24:44.640189 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8766c893-91de-40a1-b884-381264755524" containerName="glance-log" containerID="cri-o://e4d92fee75bd1ec69fd16528e38ef58e14eb355e4dd1b3a0540f3a72c00dc6e5" gracePeriod=30 
Mar 12 08:24:44 crc kubenswrapper[4809]: I0312 08:24:44.640374 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8766c893-91de-40a1-b884-381264755524" containerName="glance-httpd" containerID="cri-o://6fe198a9e0493ad9e7ac15c32d55e3b52abb8fa1dcf93d05ded3ac3af3f2ae7c" gracePeriod=30 Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.048597 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.048693 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.048762 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.050213 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.050295 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" 
podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" gracePeriod=600 Mar 12 08:24:45 crc kubenswrapper[4809]: E0312 08:24:45.288511 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.554449 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" exitCode=0 Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.554914 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10"} Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.554958 4809 scope.go:117] "RemoveContainer" containerID="a328a2cdcc5abe038555d03cc30ceecaf7377dc57d422a2ede895c12e879661b" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.555895 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:24:45 crc kubenswrapper[4809]: E0312 08:24:45.558769 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.616972 4809 generic.go:334] "Generic (PLEG): container finished" podID="8766c893-91de-40a1-b884-381264755524" containerID="e4d92fee75bd1ec69fd16528e38ef58e14eb355e4dd1b3a0540f3a72c00dc6e5" exitCode=143 Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.617037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8766c893-91de-40a1-b884-381264755524","Type":"ContainerDied","Data":"e4d92fee75bd1ec69fd16528e38ef58e14eb355e4dd1b3a0540f3a72c00dc6e5"} Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.715703 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.831837 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fqj2\" (UniqueName: \"kubernetes.io/projected/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-kube-api-access-7fqj2\") pod \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.831908 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data-custom\") pod \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.831975 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-scripts\") pod \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\" (UID: 
\"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.832007 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data\") pod \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.832051 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle\") pod \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.832271 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-etc-machine-id\") pod \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.832300 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-logs\") pod \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.833145 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-logs" (OuterVolumeSpecName: "logs") pod "4d797bd8-7c65-4856-9b7b-3c207b1a64c4" (UID: "4d797bd8-7c65-4856-9b7b-3c207b1a64c4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.833588 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.833638 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4d797bd8-7c65-4856-9b7b-3c207b1a64c4" (UID: "4d797bd8-7c65-4856-9b7b-3c207b1a64c4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.851181 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-scripts" (OuterVolumeSpecName: "scripts") pod "4d797bd8-7c65-4856-9b7b-3c207b1a64c4" (UID: "4d797bd8-7c65-4856-9b7b-3c207b1a64c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.852615 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4d797bd8-7c65-4856-9b7b-3c207b1a64c4" (UID: "4d797bd8-7c65-4856-9b7b-3c207b1a64c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.854481 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-kube-api-access-7fqj2" (OuterVolumeSpecName: "kube-api-access-7fqj2") pod "4d797bd8-7c65-4856-9b7b-3c207b1a64c4" (UID: "4d797bd8-7c65-4856-9b7b-3c207b1a64c4"). 
InnerVolumeSpecName "kube-api-access-7fqj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.937719 4809 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.938288 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fqj2\" (UniqueName: \"kubernetes.io/projected/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-kube-api-access-7fqj2\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.938311 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.938328 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:45 crc kubenswrapper[4809]: E0312 08:24:45.938971 4809 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle podName:4d797bd8-7c65-4856-9b7b-3c207b1a64c4 nodeName:}" failed. No retries permitted until 2026-03-12 08:24:46.438933476 +0000 UTC m=+1560.020969209 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle") pod "4d797bd8-7c65-4856-9b7b-3c207b1a64c4" (UID: "4d797bd8-7c65-4856-9b7b-3c207b1a64c4") : error deleting /var/lib/kubelet/pods/4d797bd8-7c65-4856-9b7b-3c207b1a64c4/volume-subpaths: remove /var/lib/kubelet/pods/4d797bd8-7c65-4856-9b7b-3c207b1a64c4/volume-subpaths: no such file or directory Mar 12 08:24:45 crc kubenswrapper[4809]: I0312 08:24:45.943035 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data" (OuterVolumeSpecName: "config-data") pod "4d797bd8-7c65-4856-9b7b-3c207b1a64c4" (UID: "4d797bd8-7c65-4856-9b7b-3c207b1a64c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.041226 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.085322 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.090229 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6586c99478-72j5b" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.142612 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-config-data\") pod \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.142690 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh57h\" (UniqueName: \"kubernetes.io/projected/3c291d73-cd7f-494e-8684-e6bda1c78259-kube-api-access-bh57h\") pod \"3c291d73-cd7f-494e-8684-e6bda1c78259\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.142797 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-combined-ca-bundle\") pod \"3c291d73-cd7f-494e-8684-e6bda1c78259\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.142837 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-combined-ca-bundle\") pod \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.142904 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-public-tls-certs\") pod \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.142923 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data-custom\") pod \"3c291d73-cd7f-494e-8684-e6bda1c78259\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.143016 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data\") pod \"3c291d73-cd7f-494e-8684-e6bda1c78259\" (UID: \"3c291d73-cd7f-494e-8684-e6bda1c78259\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.143079 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-949jb\" (UniqueName: \"kubernetes.io/projected/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-kube-api-access-949jb\") pod \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.143256 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-internal-tls-certs\") pod \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.143316 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-scripts\") pod \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.143389 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-logs\") pod \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\" (UID: \"0f8141f6-bf63-4133-9451-df9c0dd0c1e7\") " Mar 12 08:24:46 crc kubenswrapper[4809]: 
I0312 08:24:46.155512 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-logs" (OuterVolumeSpecName: "logs") pod "0f8141f6-bf63-4133-9451-df9c0dd0c1e7" (UID: "0f8141f6-bf63-4133-9451-df9c0dd0c1e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.161454 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c291d73-cd7f-494e-8684-e6bda1c78259-kube-api-access-bh57h" (OuterVolumeSpecName: "kube-api-access-bh57h") pod "3c291d73-cd7f-494e-8684-e6bda1c78259" (UID: "3c291d73-cd7f-494e-8684-e6bda1c78259"). InnerVolumeSpecName "kube-api-access-bh57h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.170496 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-kube-api-access-949jb" (OuterVolumeSpecName: "kube-api-access-949jb") pod "0f8141f6-bf63-4133-9451-df9c0dd0c1e7" (UID: "0f8141f6-bf63-4133-9451-df9c0dd0c1e7"). InnerVolumeSpecName "kube-api-access-949jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.171304 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c291d73-cd7f-494e-8684-e6bda1c78259" (UID: "3c291d73-cd7f-494e-8684-e6bda1c78259"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.173966 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-scripts" (OuterVolumeSpecName: "scripts") pod "0f8141f6-bf63-4133-9451-df9c0dd0c1e7" (UID: "0f8141f6-bf63-4133-9451-df9c0dd0c1e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.233931 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c291d73-cd7f-494e-8684-e6bda1c78259" (UID: "3c291d73-cd7f-494e-8684-e6bda1c78259"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.247135 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.247169 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.247180 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-949jb\" (UniqueName: \"kubernetes.io/projected/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-kube-api-access-949jb\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.247195 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-scripts\") on node \"crc\" DevicePath \"\"" Mar 
12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.247206 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.247217 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh57h\" (UniqueName: \"kubernetes.io/projected/3c291d73-cd7f-494e-8684-e6bda1c78259-kube-api-access-bh57h\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.292482 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data" (OuterVolumeSpecName: "config-data") pod "3c291d73-cd7f-494e-8684-e6bda1c78259" (UID: "3c291d73-cd7f-494e-8684-e6bda1c78259"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.303209 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-config-data" (OuterVolumeSpecName: "config-data") pod "0f8141f6-bf63-4133-9451-df9c0dd0c1e7" (UID: "0f8141f6-bf63-4133-9451-df9c0dd0c1e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.359889 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c291d73-cd7f-494e-8684-e6bda1c78259-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.371044 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.360063 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f8141f6-bf63-4133-9451-df9c0dd0c1e7" (UID: "0f8141f6-bf63-4133-9451-df9c0dd0c1e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.417204 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0f8141f6-bf63-4133-9451-df9c0dd0c1e7" (UID: "0f8141f6-bf63-4133-9451-df9c0dd0c1e7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.436504 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0f8141f6-bf63-4133-9451-df9c0dd0c1e7" (UID: "0f8141f6-bf63-4133-9451-df9c0dd0c1e7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.477060 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle\") pod \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\" (UID: \"4d797bd8-7c65-4856-9b7b-3c207b1a64c4\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.478478 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.478508 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.478535 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f8141f6-bf63-4133-9451-df9c0dd0c1e7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.494365 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d797bd8-7c65-4856-9b7b-3c207b1a64c4" (UID: "4d797bd8-7c65-4856-9b7b-3c207b1a64c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.506974 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.579757 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data-custom\") pod \"5e637b87-4eab-400a-b98b-08f2da100650\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.579849 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szncq\" (UniqueName: \"kubernetes.io/projected/5e637b87-4eab-400a-b98b-08f2da100650-kube-api-access-szncq\") pod \"5e637b87-4eab-400a-b98b-08f2da100650\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.580136 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data\") pod \"5e637b87-4eab-400a-b98b-08f2da100650\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.580187 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-combined-ca-bundle\") pod \"5e637b87-4eab-400a-b98b-08f2da100650\" (UID: \"5e637b87-4eab-400a-b98b-08f2da100650\") " Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.581266 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d797bd8-7c65-4856-9b7b-3c207b1a64c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.589397 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e637b87-4eab-400a-b98b-08f2da100650-kube-api-access-szncq" 
(OuterVolumeSpecName: "kube-api-access-szncq") pod "5e637b87-4eab-400a-b98b-08f2da100650" (UID: "5e637b87-4eab-400a-b98b-08f2da100650"). InnerVolumeSpecName "kube-api-access-szncq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.593698 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-84d75fd6d6-jt5pm"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.596626 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5e637b87-4eab-400a-b98b-08f2da100650" (UID: "5e637b87-4eab-400a-b98b-08f2da100650"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.627808 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e637b87-4eab-400a-b98b-08f2da100650" (UID: "5e637b87-4eab-400a-b98b-08f2da100650"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.633099 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-765ccf488b-zgbk5"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.636798 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerStarted","Data":"33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646"} Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.639421 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7fb898569b-fzjx6" event={"ID":"3c291d73-cd7f-494e-8684-e6bda1c78259","Type":"ContainerDied","Data":"0bf55d0d7023594f319062c0eeda7eff49e51d8b80564d2182890187b4d55e76"} Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.639502 4809 scope.go:117] "RemoveContainer" containerID="59b06b4b056813793a9b771b9bc23f38a65b7b4e156d88ed76e70b6acd0398e5" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.639585 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7fb898569b-fzjx6" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.692965 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd","Type":"ContainerStarted","Data":"6fd38134b5b0c726482817b6b85176bd1ea472db970305f9cfbbde591331ff44"} Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.698554 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.698700 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szncq\" (UniqueName: \"kubernetes.io/projected/5e637b87-4eab-400a-b98b-08f2da100650-kube-api-access-szncq\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.698756 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.702693 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84d75fd6d6-jt5pm" event={"ID":"157d4d8b-15cb-413b-b689-209cdf45f1b7","Type":"ContainerStarted","Data":"f2bf18c73590fe900d6459ca011b5f41e1ddae8e037bf6d6b185acb17aef74ce"} Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.708434 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d797bd8-7c65-4856-9b7b-3c207b1a64c4","Type":"ContainerDied","Data":"09469de8e095f27b0ca6a7ced8dced6ccf6fb793b2263389fbd945451c3a4746"} Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.708501 4809 scope.go:117] "RemoveContainer" containerID="435f2cbaa12c9c1d3ad668b2f3265fca77c1ba7eacba43b2082cdd03c435fef4" Mar 12 08:24:46 
crc kubenswrapper[4809]: I0312 08:24:46.708523 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.718477 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6586c99478-72j5b" event={"ID":"0f8141f6-bf63-4133-9451-df9c0dd0c1e7","Type":"ContainerDied","Data":"81eb050c0c272ca46efed220e765729bc536f39fd373f2fba696f7f5e8215b7a"} Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.719197 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6586c99478-72j5b" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.727781 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.728804 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7fb898569b-fzjx6"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.728889 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f7c584ffb-4dlpd" event={"ID":"5e637b87-4eab-400a-b98b-08f2da100650","Type":"ContainerDied","Data":"c158d3a4b588485c82571fb9e425555f1c4b311a22ab3d24659965a5cab8aa44"} Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.737392 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" event={"ID":"be0d6711-7a21-40cf-ba47-eff3c52046e7","Type":"ContainerStarted","Data":"0b22d5182d0af8c2bb773e2756cbd703a653fca05281ad301535b345b6b0aecd"} Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.747194 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7fb898569b-fzjx6"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.756982 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" 
podStartSLOduration=2.7275629710000002 podStartE2EDuration="29.756326158s" podCreationTimestamp="2026-03-12 08:24:17 +0000 UTC" firstStartedPulling="2026-03-12 08:24:18.369478023 +0000 UTC m=+1531.951513756" lastFinishedPulling="2026-03-12 08:24:45.39824121 +0000 UTC m=+1558.980276943" observedRunningTime="2026-03-12 08:24:46.718056497 +0000 UTC m=+1560.300092230" watchObservedRunningTime="2026-03-12 08:24:46.756326158 +0000 UTC m=+1560.338361901" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.764374 4809 scope.go:117] "RemoveContainer" containerID="38485dbda02f38f759b8a627ebadac18723fadb38f036f5cca5a07bb7f198019" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.823376 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data" (OuterVolumeSpecName: "config-data") pod "5e637b87-4eab-400a-b98b-08f2da100650" (UID: "5e637b87-4eab-400a-b98b-08f2da100650"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.832339 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.860138 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.867547 4809 scope.go:117] "RemoveContainer" containerID="deac96f5d03d62ef8b9106ebf6cbcd9a47096ae5a8380d7f3af6bec1c61cc791" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.882206 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6586c99478-72j5b"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.895452 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6586c99478-72j5b"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.906359 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:46 crc kubenswrapper[4809]: E0312 08:24:46.907203 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907225 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api" Mar 12 08:24:46 crc kubenswrapper[4809]: E0312 08:24:46.907242 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api-log" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907248 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api-log" Mar 12 08:24:46 crc kubenswrapper[4809]: E0312 08:24:46.907265 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerName="placement-api" Mar 12 08:24:46 
crc kubenswrapper[4809]: I0312 08:24:46.907272 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerName="placement-api" Mar 12 08:24:46 crc kubenswrapper[4809]: E0312 08:24:46.907298 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e637b87-4eab-400a-b98b-08f2da100650" containerName="heat-cfnapi" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907304 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e637b87-4eab-400a-b98b-08f2da100650" containerName="heat-cfnapi" Mar 12 08:24:46 crc kubenswrapper[4809]: E0312 08:24:46.907316 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c291d73-cd7f-494e-8684-e6bda1c78259" containerName="heat-api" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907323 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c291d73-cd7f-494e-8684-e6bda1c78259" containerName="heat-api" Mar 12 08:24:46 crc kubenswrapper[4809]: E0312 08:24:46.907335 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerName="placement-log" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907341 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerName="placement-log" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907576 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c291d73-cd7f-494e-8684-e6bda1c78259" containerName="heat-api" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907602 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerName="placement-api" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907669 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api-log" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 
08:24:46.907678 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e637b87-4eab-400a-b98b-08f2da100650" containerName="heat-cfnapi" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907691 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" containerName="placement-log" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.907698 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" containerName="cinder-api" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.909598 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e637b87-4eab-400a-b98b-08f2da100650-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.909998 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.914067 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.914283 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.914174 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.923867 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.951725 4809 scope.go:117] "RemoveContainer" containerID="c78920513e2ac05da31144b07e115193999eae386099b094fa6da04caa07e3bd" Mar 12 08:24:46 crc kubenswrapper[4809]: I0312 08:24:46.993315 4809 scope.go:117] "RemoveContainer" containerID="7cb2db8a0543905d4c83711a3c028acb096f2a452bbce32fac6798c8cd437d95" Mar 12 
08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.011614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-logs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.011972 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.012048 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.012217 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-scripts\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.012318 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.012380 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-config-data\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.012459 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzd6g\" (UniqueName: \"kubernetes.io/projected/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-kube-api-access-mzd6g\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.012583 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.012679 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.077367 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5f7c584ffb-4dlpd"] Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.089698 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5f7c584ffb-4dlpd"] Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.116669 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-scripts\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") 
" pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.116914 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.116988 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-config-data\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.117080 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzd6g\" (UniqueName: \"kubernetes.io/projected/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-kube-api-access-mzd6g\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.119991 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.120095 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.120219 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-logs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.120445 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.120517 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.121289 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.121776 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-logs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.122055 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.123748 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.124044 4809 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.125527 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-config-data\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.135053 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.139321 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.145479 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.148669 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.150092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzd6g\" 
(UniqueName: \"kubernetes.io/projected/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-kube-api-access-mzd6g\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.156768 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7a99ec-57ee-48dd-948b-6e9309a0aa10-scripts\") pod \"cinder-api-0\" (UID: \"ab7a99ec-57ee-48dd-948b-6e9309a0aa10\") " pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.158857 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8141f6-bf63-4133-9451-df9c0dd0c1e7" path="/var/lib/kubelet/pods/0f8141f6-bf63-4133-9451-df9c0dd0c1e7/volumes" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.162456 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c291d73-cd7f-494e-8684-e6bda1c78259" path="/var/lib/kubelet/pods/3c291d73-cd7f-494e-8684-e6bda1c78259/volumes" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.163346 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d797bd8-7c65-4856-9b7b-3c207b1a64c4" path="/var/lib/kubelet/pods/4d797bd8-7c65-4856-9b7b-3c207b1a64c4/volumes" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.164297 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e637b87-4eab-400a-b98b-08f2da100650" path="/var/lib/kubelet/pods/5e637b87-4eab-400a-b98b-08f2da100650/volumes" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.263896 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.763461 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84d75fd6d6-jt5pm" event={"ID":"157d4d8b-15cb-413b-b689-209cdf45f1b7","Type":"ContainerStarted","Data":"1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243"} Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.764228 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.766805 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" event={"ID":"be0d6711-7a21-40cf-ba47-eff3c52046e7","Type":"ContainerStarted","Data":"15d7a3b457ca3bc24189cdfffc58e867d200d6e0bddb3c1fcfccb0ae4d66044d"} Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.766951 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.771768 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerStarted","Data":"7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947"} Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.771800 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerStarted","Data":"29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439"} Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.790787 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-84d75fd6d6-jt5pm" podStartSLOduration=12.790765804 podStartE2EDuration="12.790765804s" podCreationTimestamp="2026-03-12 08:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:47.783364632 +0000 UTC m=+1561.365400365" watchObservedRunningTime="2026-03-12 08:24:47.790765804 +0000 UTC m=+1561.372801537" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.818968 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" podStartSLOduration=12.81894625 podStartE2EDuration="12.81894625s" podCreationTimestamp="2026-03-12 08:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:47.81344293 +0000 UTC m=+1561.395478663" watchObservedRunningTime="2026-03-12 08:24:47.81894625 +0000 UTC m=+1561.400981993" Mar 12 08:24:47 crc kubenswrapper[4809]: I0312 08:24:47.859875 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.814217 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab7a99ec-57ee-48dd-948b-6e9309a0aa10","Type":"ContainerStarted","Data":"bd81b5789268d51ac6fd1e137774c1f850c69c1d981fdd09eed490284a04ff4e"} Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.815224 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab7a99ec-57ee-48dd-948b-6e9309a0aa10","Type":"ContainerStarted","Data":"18d19b7f894b32e86c58371c318034682e4188ff3bf54eae0723460996b3707b"} Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.818300 4809 generic.go:334] "Generic (PLEG): container finished" podID="8766c893-91de-40a1-b884-381264755524" containerID="6fe198a9e0493ad9e7ac15c32d55e3b52abb8fa1dcf93d05ded3ac3af3f2ae7c" exitCode=0 Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.819183 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8766c893-91de-40a1-b884-381264755524","Type":"ContainerDied","Data":"6fe198a9e0493ad9e7ac15c32d55e3b52abb8fa1dcf93d05ded3ac3af3f2ae7c"} Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.819268 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8766c893-91de-40a1-b884-381264755524","Type":"ContainerDied","Data":"8b26403ec7d11594f64ea47fce7f5382e8da5ed65d075f7e10340a77cddfdd51"} Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.819307 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b26403ec7d11594f64ea47fce7f5382e8da5ed65d075f7e10340a77cddfdd51" Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.943027 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.945081 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerName="glance-httpd" containerID="cri-o://26c86f288b5f00692fc2f46685df82851d0a87b3cba2344baa420954d27b9f69" gracePeriod=30 Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.945050 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerName="glance-log" containerID="cri-o://4b3a4b7f8aff56f30bc85b7810c93920eb309ccc4f891a7bd049a1bd7d8eebb8" gracePeriod=30 Mar 12 08:24:48 crc kubenswrapper[4809]: I0312 08:24:48.968033 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.106246 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-config-data\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.106728 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-logs\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.106807 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-httpd-run\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.106913 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-public-tls-certs\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.106959 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-scripts\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.107429 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.107529 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-logs" (OuterVolumeSpecName: "logs") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.111787 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.113087 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-combined-ca-bundle\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.113166 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p98jr\" (UniqueName: \"kubernetes.io/projected/8766c893-91de-40a1-b884-381264755524-kube-api-access-p98jr\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.114268 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:49 crc 
kubenswrapper[4809]: I0312 08:24:49.114285 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8766c893-91de-40a1-b884-381264755524-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.135315 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8766c893-91de-40a1-b884-381264755524-kube-api-access-p98jr" (OuterVolumeSpecName: "kube-api-access-p98jr") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "kube-api-access-p98jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.140749 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-scripts" (OuterVolumeSpecName: "scripts") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.193241 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.217326 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.217362 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.217375 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p98jr\" (UniqueName: \"kubernetes.io/projected/8766c893-91de-40a1-b884-381264755524-kube-api-access-p98jr\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.231388 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028" (OuterVolumeSpecName: "glance") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "pvc-3f492e23-d032-4adb-b2a5-07078c3cb028". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.278996 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.319016 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-config-data" (OuterVolumeSpecName: "config-data") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.320195 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-config-data\") pod \"8766c893-91de-40a1-b884-381264755524\" (UID: \"8766c893-91de-40a1-b884-381264755524\") " Mar 12 08:24:49 crc kubenswrapper[4809]: W0312 08:24:49.320390 4809 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8766c893-91de-40a1-b884-381264755524/volumes/kubernetes.io~secret/config-data Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.320434 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-config-data" (OuterVolumeSpecName: "config-data") pod "8766c893-91de-40a1-b884-381264755524" (UID: "8766c893-91de-40a1-b884-381264755524"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.321923 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.321954 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8766c893-91de-40a1-b884-381264755524-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.322006 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") on node \"crc\" " Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.395362 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.395622 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3f492e23-d032-4adb-b2a5-07078c3cb028" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028") on node "crc" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.431190 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.838378 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab7a99ec-57ee-48dd-948b-6e9309a0aa10","Type":"ContainerStarted","Data":"f801cb1b25a97850f8522f25e4060c983da2bda0f57cb4feb5b931e768c19aed"} Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.840341 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.847478 4809 generic.go:334] "Generic (PLEG): container finished" podID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerID="4b3a4b7f8aff56f30bc85b7810c93920eb309ccc4f891a7bd049a1bd7d8eebb8" exitCode=143 Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.847737 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.847577 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6749cc55-c4e2-4011-bb67-0f1676ba152a","Type":"ContainerDied","Data":"4b3a4b7f8aff56f30bc85b7810c93920eb309ccc4f891a7bd049a1bd7d8eebb8"} Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.884175 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.884143921 podStartE2EDuration="3.884143921s" podCreationTimestamp="2026-03-12 08:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:49.867353144 +0000 UTC m=+1563.449388877" watchObservedRunningTime="2026-03-12 08:24:49.884143921 +0000 UTC m=+1563.466179654" Mar 12 08:24:49 crc kubenswrapper[4809]: I0312 08:24:49.999268 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.017950 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.024219 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:24:50 crc kubenswrapper[4809]: E0312 08:24:50.024877 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8766c893-91de-40a1-b884-381264755524" containerName="glance-httpd" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.024902 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8766c893-91de-40a1-b884-381264755524" containerName="glance-httpd" Mar 12 08:24:50 crc kubenswrapper[4809]: E0312 08:24:50.024962 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8766c893-91de-40a1-b884-381264755524" 
containerName="glance-log" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.024970 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8766c893-91de-40a1-b884-381264755524" containerName="glance-log" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.025214 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8766c893-91de-40a1-b884-381264755524" containerName="glance-log" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.025250 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8766c893-91de-40a1-b884-381264755524" containerName="glance-httpd" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.028187 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.046944 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.048301 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.065977 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.108065 4809 scope.go:117] "RemoveContainer" containerID="f5b4b8d0bfc4bb029e2ed3cd3a5f85628813c70631ec341d6feb9709f5fcabad" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.108494 4809 scope.go:117] "RemoveContainer" containerID="bbc0864639506c8c2decddea0021d65f355e5aca267a33817f42430f82d211ce" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.159350 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.159457 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.159515 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-logs\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.159547 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.159578 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7htbl\" (UniqueName: \"kubernetes.io/projected/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-kube-api-access-7htbl\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.159630 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.159686 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.159707 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.263212 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.263398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-logs\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.263446 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.263497 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7htbl\" (UniqueName: \"kubernetes.io/projected/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-kube-api-access-7htbl\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.263582 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.263694 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.263755 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.263894 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.265677 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-logs\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.271309 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.294068 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.297905 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.299041 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.299393 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.317807 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7htbl\" (UniqueName: \"kubernetes.io/projected/1e2012b7-5391-4ad2-a9aa-0ffb55502fe7-kube-api-access-7htbl\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.452844 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.452896 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/13e7c719e4a31debe9dfef24b9790869cfa71b5da18da849b6185df18b1db2d8/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.553782 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f492e23-d032-4adb-b2a5-07078c3cb028\") pod \"glance-default-external-api-0\" (UID: \"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7\") " pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.690700 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.893907 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f489d8994-s29s4" event={"ID":"5e4b8a60-4319-4d91-a9e5-897d76e61f96","Type":"ContainerStarted","Data":"596be4b988621d353b5466866266e9e649c932a4ce55fdf02d6f7ec1930974ed"} Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.894516 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.904549 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerStarted","Data":"6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a"} Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.904708 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="ceilometer-central-agent" containerID="cri-o://33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646" gracePeriod=30 Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.904773 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.904826 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="proxy-httpd" containerID="cri-o://6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a" gracePeriod=30 Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.904847 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="sg-core" 
containerID="cri-o://7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947" gracePeriod=30 Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.904916 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="ceilometer-notification-agent" containerID="cri-o://29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439" gracePeriod=30 Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.927234 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd664c56-8zpnk" event={"ID":"efdd8218-be37-4288-a09b-4b4119b5ca39","Type":"ContainerStarted","Data":"be3735f0f67c6917d2b921e4dc1e7667101646436f00d546509750763220c817"} Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.932307 4809 scope.go:117] "RemoveContainer" containerID="be3735f0f67c6917d2b921e4dc1e7667101646436f00d546509750763220c817" Mar 12 08:24:50 crc kubenswrapper[4809]: E0312 08:24:50.932719 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-api pod=heat-api-bcd664c56-8zpnk_openstack(efdd8218-be37-4288-a09b-4b4119b5ca39)\"" pod="openstack/heat-api-bcd664c56-8zpnk" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" Mar 12 08:24:50 crc kubenswrapper[4809]: I0312 08:24:50.948309 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.479926079 podStartE2EDuration="16.948288394s" podCreationTimestamp="2026-03-12 08:24:34 +0000 UTC" firstStartedPulling="2026-03-12 08:24:35.329301764 +0000 UTC m=+1548.911337497" lastFinishedPulling="2026-03-12 08:24:49.797664079 +0000 UTC m=+1563.379699812" observedRunningTime="2026-03-12 08:24:50.944757027 +0000 UTC m=+1564.526792760" watchObservedRunningTime="2026-03-12 08:24:50.948288394 +0000 UTC m=+1564.530324127" Mar 12 08:24:50 crc 
kubenswrapper[4809]: I0312 08:24:50.986375 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.065483 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.067370 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.077720 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-65959dcdbb-g8h24"] Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.077958 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-65959dcdbb-g8h24" podUID="970fd7c0-4095-4aa7-8e61-f300972a7124" containerName="heat-engine" containerID="cri-o://eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e" gracePeriod=60 Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.136358 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8766c893-91de-40a1-b884-381264755524" path="/var/lib/kubelet/pods/8766c893-91de-40a1-b884-381264755524/volumes" Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.429994 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 08:24:51 crc kubenswrapper[4809]: W0312 08:24:51.431565 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e2012b7_5391_4ad2_a9aa_0ffb55502fe7.slice/crio-464c83912a4d7047356ab2e9c2d76d5921cc97e4207637bef6f7b231d3f41eeb WatchSource:0}: Error finding container 464c83912a4d7047356ab2e9c2d76d5921cc97e4207637bef6f7b231d3f41eeb: Status 404 returned error can't find the container with id 464c83912a4d7047356ab2e9c2d76d5921cc97e4207637bef6f7b231d3f41eeb Mar 
12 08:24:51 crc kubenswrapper[4809]: E0312 08:24:51.899211 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:24:51 crc kubenswrapper[4809]: E0312 08:24:51.901884 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:24:51 crc kubenswrapper[4809]: E0312 08:24:51.904607 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:24:51 crc kubenswrapper[4809]: E0312 08:24:51.904662 4809 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-65959dcdbb-g8h24" podUID="970fd7c0-4095-4aa7-8e61-f300972a7124" containerName="heat-engine" Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.948592 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f489d8994-s29s4" event={"ID":"5e4b8a60-4319-4d91-a9e5-897d76e61f96","Type":"ContainerDied","Data":"596be4b988621d353b5466866266e9e649c932a4ce55fdf02d6f7ec1930974ed"} Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.948726 4809 scope.go:117] "RemoveContainer" 
containerID="bbc0864639506c8c2decddea0021d65f355e5aca267a33817f42430f82d211ce" Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.949960 4809 scope.go:117] "RemoveContainer" containerID="596be4b988621d353b5466866266e9e649c932a4ce55fdf02d6f7ec1930974ed" Mar 12 08:24:51 crc kubenswrapper[4809]: E0312 08:24:51.950441 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f489d8994-s29s4_openstack(5e4b8a60-4319-4d91-a9e5-897d76e61f96)\"" pod="openstack/heat-cfnapi-6f489d8994-s29s4" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.948351 4809 generic.go:334] "Generic (PLEG): container finished" podID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerID="596be4b988621d353b5466866266e9e649c932a4ce55fdf02d6f7ec1930974ed" exitCode=1 Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.955854 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7","Type":"ContainerStarted","Data":"464c83912a4d7047356ab2e9c2d76d5921cc97e4207637bef6f7b231d3f41eeb"} Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.982981 4809 generic.go:334] "Generic (PLEG): container finished" podID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerID="6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a" exitCode=0 Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.983030 4809 generic.go:334] "Generic (PLEG): container finished" podID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerID="7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947" exitCode=2 Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.983041 4809 generic.go:334] "Generic (PLEG): container finished" podID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" 
containerID="29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439" exitCode=0 Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.983186 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerDied","Data":"6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a"} Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.983243 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerDied","Data":"7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947"} Mar 12 08:24:51 crc kubenswrapper[4809]: I0312 08:24:51.983254 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerDied","Data":"29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439"} Mar 12 08:24:52 crc kubenswrapper[4809]: I0312 08:24:52.002918 4809 generic.go:334] "Generic (PLEG): container finished" podID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerID="be3735f0f67c6917d2b921e4dc1e7667101646436f00d546509750763220c817" exitCode=1 Mar 12 08:24:52 crc kubenswrapper[4809]: I0312 08:24:52.002989 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd664c56-8zpnk" event={"ID":"efdd8218-be37-4288-a09b-4b4119b5ca39","Type":"ContainerDied","Data":"be3735f0f67c6917d2b921e4dc1e7667101646436f00d546509750763220c817"} Mar 12 08:24:52 crc kubenswrapper[4809]: I0312 08:24:52.016417 4809 scope.go:117] "RemoveContainer" containerID="be3735f0f67c6917d2b921e4dc1e7667101646436f00d546509750763220c817" Mar 12 08:24:52 crc kubenswrapper[4809]: E0312 08:24:52.017490 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-api 
pod=heat-api-bcd664c56-8zpnk_openstack(efdd8218-be37-4288-a09b-4b4119b5ca39)\"" pod="openstack/heat-api-bcd664c56-8zpnk" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" Mar 12 08:24:52 crc kubenswrapper[4809]: I0312 08:24:52.017948 4809 scope.go:117] "RemoveContainer" containerID="f5b4b8d0bfc4bb029e2ed3cd3a5f85628813c70631ec341d6feb9709f5fcabad" Mar 12 08:24:52 crc kubenswrapper[4809]: I0312 08:24:52.334817 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:24:52 crc kubenswrapper[4809]: I0312 08:24:52.410847 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:24:52 crc kubenswrapper[4809]: I0312 08:24:52.578409 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8bcp4"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.024322 4809 scope.go:117] "RemoveContainer" containerID="be3735f0f67c6917d2b921e4dc1e7667101646436f00d546509750763220c817" Mar 12 08:24:53 crc kubenswrapper[4809]: E0312 08:24:53.025769 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-api pod=heat-api-bcd664c56-8zpnk_openstack(efdd8218-be37-4288-a09b-4b4119b5ca39)\"" pod="openstack/heat-api-bcd664c56-8zpnk" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.036974 4809 scope.go:117] "RemoveContainer" containerID="596be4b988621d353b5466866266e9e649c932a4ce55fdf02d6f7ec1930974ed" Mar 12 08:24:53 crc kubenswrapper[4809]: E0312 08:24:53.037353 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f489d8994-s29s4_openstack(5e4b8a60-4319-4d91-a9e5-897d76e61f96)\"" 
pod="openstack/heat-cfnapi-6f489d8994-s29s4" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.048552 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7","Type":"ContainerStarted","Data":"ff63da6956515015d023c54997c63acf2701c8dca03cc3db4a668e8dc89e09b0"} Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.048838 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e2012b7-5391-4ad2-a9aa-0ffb55502fe7","Type":"ContainerStarted","Data":"f6839db747ebaed1d0c077b7e97cbb6da5639713793a288aa1c4ab812f576caa"} Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.061811 4809 generic.go:334] "Generic (PLEG): container finished" podID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerID="26c86f288b5f00692fc2f46685df82851d0a87b3cba2344baa420954d27b9f69" exitCode=0 Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.062751 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6749cc55-c4e2-4011-bb67-0f1676ba152a","Type":"ContainerDied","Data":"26c86f288b5f00692fc2f46685df82851d0a87b3cba2344baa420954d27b9f69"} Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.105001 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.104973472 podStartE2EDuration="4.104973472s" podCreationTimestamp="2026-03-12 08:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:53.094742014 +0000 UTC m=+1566.676777757" watchObservedRunningTime="2026-03-12 08:24:53.104973472 +0000 UTC m=+1566.687009205" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.278145 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.405811 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"6749cc55-c4e2-4011-bb67-0f1676ba152a\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.405890 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-combined-ca-bundle\") pod \"6749cc55-c4e2-4011-bb67-0f1676ba152a\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.405950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-logs\") pod \"6749cc55-c4e2-4011-bb67-0f1676ba152a\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.406052 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-internal-tls-certs\") pod \"6749cc55-c4e2-4011-bb67-0f1676ba152a\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.406174 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shrw2\" (UniqueName: \"kubernetes.io/projected/6749cc55-c4e2-4011-bb67-0f1676ba152a-kube-api-access-shrw2\") pod \"6749cc55-c4e2-4011-bb67-0f1676ba152a\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.406240 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-httpd-run\") pod \"6749cc55-c4e2-4011-bb67-0f1676ba152a\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.406350 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-config-data\") pod \"6749cc55-c4e2-4011-bb67-0f1676ba152a\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.406396 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-scripts\") pod \"6749cc55-c4e2-4011-bb67-0f1676ba152a\" (UID: \"6749cc55-c4e2-4011-bb67-0f1676ba152a\") " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.406941 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6749cc55-c4e2-4011-bb67-0f1676ba152a" (UID: "6749cc55-c4e2-4011-bb67-0f1676ba152a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.407300 4809 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.407797 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-logs" (OuterVolumeSpecName: "logs") pod "6749cc55-c4e2-4011-bb67-0f1676ba152a" (UID: "6749cc55-c4e2-4011-bb67-0f1676ba152a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.420582 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6749cc55-c4e2-4011-bb67-0f1676ba152a-kube-api-access-shrw2" (OuterVolumeSpecName: "kube-api-access-shrw2") pod "6749cc55-c4e2-4011-bb67-0f1676ba152a" (UID: "6749cc55-c4e2-4011-bb67-0f1676ba152a"). InnerVolumeSpecName "kube-api-access-shrw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.430477 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-scripts" (OuterVolumeSpecName: "scripts") pod "6749cc55-c4e2-4011-bb67-0f1676ba152a" (UID: "6749cc55-c4e2-4011-bb67-0f1676ba152a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.482501 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6749cc55-c4e2-4011-bb67-0f1676ba152a" (UID: "6749cc55-c4e2-4011-bb67-0f1676ba152a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.509644 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.509680 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6749cc55-c4e2-4011-bb67-0f1676ba152a-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.509697 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shrw2\" (UniqueName: \"kubernetes.io/projected/6749cc55-c4e2-4011-bb67-0f1676ba152a-kube-api-access-shrw2\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.509710 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.541395 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-config-data" (OuterVolumeSpecName: "config-data") pod "6749cc55-c4e2-4011-bb67-0f1676ba152a" (UID: "6749cc55-c4e2-4011-bb67-0f1676ba152a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.556943 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6749cc55-c4e2-4011-bb67-0f1676ba152a" (UID: "6749cc55-c4e2-4011-bb67-0f1676ba152a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.580422 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac" (OuterVolumeSpecName: "glance") pod "6749cc55-c4e2-4011-bb67-0f1676ba152a" (UID: "6749cc55-c4e2-4011-bb67-0f1676ba152a"). InnerVolumeSpecName "pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.627191 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-887k5"] Mar 12 08:24:53 crc kubenswrapper[4809]: E0312 08:24:53.627969 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerName="glance-httpd" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.627985 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerName="glance-httpd" Mar 12 08:24:53 crc kubenswrapper[4809]: E0312 08:24:53.628026 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerName="glance-log" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.628032 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerName="glance-log" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.628314 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerName="glance-httpd" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.628329 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" containerName="glance-log" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.643711 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.646261 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") on node \"crc\" " Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.646314 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.646335 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6749cc55-c4e2-4011-bb67-0f1676ba152a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.665083 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-887k5"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.704878 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bqf9m"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.707226 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.735999 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.736370 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac") on node "crc" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.753765 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdp8l\" (UniqueName: \"kubernetes.io/projected/3b066594-e122-4fc4-95bb-c66b48bfd0f3-kube-api-access-cdp8l\") pod \"nova-api-db-create-887k5\" (UID: \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\") " pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.754479 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b066594-e122-4fc4-95bb-c66b48bfd0f3-operator-scripts\") pod \"nova-api-db-create-887k5\" (UID: \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\") " pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.754934 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.780223 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-353e-account-create-update-hmg8n"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.782371 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.787505 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.813218 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-353e-account-create-update-hmg8n"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.826575 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bqf9m"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.860696 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdp8l\" (UniqueName: \"kubernetes.io/projected/3b066594-e122-4fc4-95bb-c66b48bfd0f3-kube-api-access-cdp8l\") pod \"nova-api-db-create-887k5\" (UID: \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\") " pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.860847 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b066594-e122-4fc4-95bb-c66b48bfd0f3-operator-scripts\") pod \"nova-api-db-create-887k5\" (UID: \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\") " pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.861167 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlgf\" (UniqueName: \"kubernetes.io/projected/547a107f-945b-4d0d-929d-061a9a044077-kube-api-access-2nlgf\") pod \"nova-cell0-db-create-bqf9m\" (UID: \"547a107f-945b-4d0d-929d-061a9a044077\") " pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.861251 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/547a107f-945b-4d0d-929d-061a9a044077-operator-scripts\") pod \"nova-cell0-db-create-bqf9m\" (UID: \"547a107f-945b-4d0d-929d-061a9a044077\") " pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.862097 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b066594-e122-4fc4-95bb-c66b48bfd0f3-operator-scripts\") pod \"nova-api-db-create-887k5\" (UID: \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\") " pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.882496 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rr92v"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.884270 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.884918 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdp8l\" (UniqueName: \"kubernetes.io/projected/3b066594-e122-4fc4-95bb-c66b48bfd0f3-kube-api-access-cdp8l\") pod \"nova-api-db-create-887k5\" (UID: \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\") " pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.904754 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rr92v"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.959530 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ed56-account-create-update-mdf8p"] Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.974668 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4043f775-958c-4f5d-828f-dda0811c0a7e-operator-scripts\") pod \"nova-api-353e-account-create-update-hmg8n\" (UID: 
\"4043f775-958c-4f5d-828f-dda0811c0a7e\") " pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.974800 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzpcg\" (UniqueName: \"kubernetes.io/projected/4043f775-958c-4f5d-828f-dda0811c0a7e-kube-api-access-gzpcg\") pod \"nova-api-353e-account-create-update-hmg8n\" (UID: \"4043f775-958c-4f5d-828f-dda0811c0a7e\") " pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.975067 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nlgf\" (UniqueName: \"kubernetes.io/projected/547a107f-945b-4d0d-929d-061a9a044077-kube-api-access-2nlgf\") pod \"nova-cell0-db-create-bqf9m\" (UID: \"547a107f-945b-4d0d-929d-061a9a044077\") " pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.975154 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547a107f-945b-4d0d-929d-061a9a044077-operator-scripts\") pod \"nova-cell0-db-create-bqf9m\" (UID: \"547a107f-945b-4d0d-929d-061a9a044077\") " pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.979497 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547a107f-945b-4d0d-929d-061a9a044077-operator-scripts\") pod \"nova-cell0-db-create-bqf9m\" (UID: \"547a107f-945b-4d0d-929d-061a9a044077\") " pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.980181 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:53 crc kubenswrapper[4809]: I0312 08:24:53.998993 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ed56-account-create-update-mdf8p"] Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.000050 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.000133 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nlgf\" (UniqueName: \"kubernetes.io/projected/547a107f-945b-4d0d-929d-061a9a044077-kube-api-access-2nlgf\") pod \"nova-cell0-db-create-bqf9m\" (UID: \"547a107f-945b-4d0d-929d-061a9a044077\") " pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.004376 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.034971 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.079410 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfmq\" (UniqueName: \"kubernetes.io/projected/3698cbf3-c28c-429f-b568-d8a68b979dfb-kube-api-access-4cfmq\") pod \"nova-cell1-db-create-rr92v\" (UID: \"3698cbf3-c28c-429f-b568-d8a68b979dfb\") " pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.079472 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzpcg\" (UniqueName: \"kubernetes.io/projected/4043f775-958c-4f5d-828f-dda0811c0a7e-kube-api-access-gzpcg\") pod \"nova-api-353e-account-create-update-hmg8n\" (UID: \"4043f775-958c-4f5d-828f-dda0811c0a7e\") " pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.079631 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3698cbf3-c28c-429f-b568-d8a68b979dfb-operator-scripts\") pod \"nova-cell1-db-create-rr92v\" (UID: \"3698cbf3-c28c-429f-b568-d8a68b979dfb\") " pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.079702 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4043f775-958c-4f5d-828f-dda0811c0a7e-operator-scripts\") pod \"nova-api-353e-account-create-update-hmg8n\" (UID: \"4043f775-958c-4f5d-828f-dda0811c0a7e\") " pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.082494 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4043f775-958c-4f5d-828f-dda0811c0a7e-operator-scripts\") pod 
\"nova-api-353e-account-create-update-hmg8n\" (UID: \"4043f775-958c-4f5d-828f-dda0811c0a7e\") " pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.102391 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c95a-account-create-update-k247d"] Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.102502 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzpcg\" (UniqueName: \"kubernetes.io/projected/4043f775-958c-4f5d-828f-dda0811c0a7e-kube-api-access-gzpcg\") pod \"nova-api-353e-account-create-update-hmg8n\" (UID: \"4043f775-958c-4f5d-828f-dda0811c0a7e\") " pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.104866 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.113817 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.123480 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c95a-account-create-update-k247d"] Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.134569 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.159979 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8bcp4" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" containerID="cri-o://a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610" gracePeriod=2 Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.160977 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.162059 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6749cc55-c4e2-4011-bb67-0f1676ba152a","Type":"ContainerDied","Data":"f84528640c039303cb4e4f580e5ec019695f6453e3f2b7259c9deb955d535714"} Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.162310 4809 scope.go:117] "RemoveContainer" containerID="26c86f288b5f00692fc2f46685df82851d0a87b3cba2344baa420954d27b9f69" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.182832 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3698cbf3-c28c-429f-b568-d8a68b979dfb-operator-scripts\") pod \"nova-cell1-db-create-rr92v\" (UID: \"3698cbf3-c28c-429f-b568-d8a68b979dfb\") " pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.182976 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cfmq\" (UniqueName: \"kubernetes.io/projected/3698cbf3-c28c-429f-b568-d8a68b979dfb-kube-api-access-4cfmq\") pod \"nova-cell1-db-create-rr92v\" (UID: \"3698cbf3-c28c-429f-b568-d8a68b979dfb\") " pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.183073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c61faad0-e94e-4480-8ac7-854d1717dc78-operator-scripts\") pod \"nova-cell0-ed56-account-create-update-mdf8p\" (UID: \"c61faad0-e94e-4480-8ac7-854d1717dc78\") " pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.183194 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v989\" (UniqueName: 
\"kubernetes.io/projected/c61faad0-e94e-4480-8ac7-854d1717dc78-kube-api-access-5v989\") pod \"nova-cell0-ed56-account-create-update-mdf8p\" (UID: \"c61faad0-e94e-4480-8ac7-854d1717dc78\") " pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.183971 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3698cbf3-c28c-429f-b568-d8a68b979dfb-operator-scripts\") pod \"nova-cell1-db-create-rr92v\" (UID: \"3698cbf3-c28c-429f-b568-d8a68b979dfb\") " pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.214786 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cfmq\" (UniqueName: \"kubernetes.io/projected/3698cbf3-c28c-429f-b568-d8a68b979dfb-kube-api-access-4cfmq\") pod \"nova-cell1-db-create-rr92v\" (UID: \"3698cbf3-c28c-429f-b568-d8a68b979dfb\") " pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.285965 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c61faad0-e94e-4480-8ac7-854d1717dc78-operator-scripts\") pod \"nova-cell0-ed56-account-create-update-mdf8p\" (UID: \"c61faad0-e94e-4480-8ac7-854d1717dc78\") " pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.286145 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb90ca2-e6b9-4160-a1be-80e9418f9d43-operator-scripts\") pod \"nova-cell1-c95a-account-create-update-k247d\" (UID: \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\") " pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.286219 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5v989\" (UniqueName: \"kubernetes.io/projected/c61faad0-e94e-4480-8ac7-854d1717dc78-kube-api-access-5v989\") pod \"nova-cell0-ed56-account-create-update-mdf8p\" (UID: \"c61faad0-e94e-4480-8ac7-854d1717dc78\") " pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.286302 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvlz5\" (UniqueName: \"kubernetes.io/projected/efb90ca2-e6b9-4160-a1be-80e9418f9d43-kube-api-access-tvlz5\") pod \"nova-cell1-c95a-account-create-update-k247d\" (UID: \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\") " pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.288239 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c61faad0-e94e-4480-8ac7-854d1717dc78-operator-scripts\") pod \"nova-cell0-ed56-account-create-update-mdf8p\" (UID: \"c61faad0-e94e-4480-8ac7-854d1717dc78\") " pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.334218 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v989\" (UniqueName: \"kubernetes.io/projected/c61faad0-e94e-4480-8ac7-854d1717dc78-kube-api-access-5v989\") pod \"nova-cell0-ed56-account-create-update-mdf8p\" (UID: \"c61faad0-e94e-4480-8ac7-854d1717dc78\") " pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.336696 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.389237 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb90ca2-e6b9-4160-a1be-80e9418f9d43-operator-scripts\") pod \"nova-cell1-c95a-account-create-update-k247d\" (UID: \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\") " pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.389741 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlz5\" (UniqueName: \"kubernetes.io/projected/efb90ca2-e6b9-4160-a1be-80e9418f9d43-kube-api-access-tvlz5\") pod \"nova-cell1-c95a-account-create-update-k247d\" (UID: \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\") " pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.393322 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb90ca2-e6b9-4160-a1be-80e9418f9d43-operator-scripts\") pod \"nova-cell1-c95a-account-create-update-k247d\" (UID: \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\") " pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.417204 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvlz5\" (UniqueName: \"kubernetes.io/projected/efb90ca2-e6b9-4160-a1be-80e9418f9d43-kube-api-access-tvlz5\") pod \"nova-cell1-c95a-account-create-update-k247d\" (UID: \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\") " pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.529598 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.590673 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.605270 4809 scope.go:117] "RemoveContainer" containerID="4b3a4b7f8aff56f30bc85b7810c93920eb309ccc4f891a7bd049a1bd7d8eebb8" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.646237 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.657679 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.672065 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.679140 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.685853 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.686258 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.692628 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.710520 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-887k5"] Mar 12 08:24:54 crc kubenswrapper[4809]: W0312 08:24:54.744215 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b066594_e122_4fc4_95bb_c66b48bfd0f3.slice/crio-96eeae435fb413c2328f87e4e1f9db61de4175bd380e177da5396628640d3562 WatchSource:0}: Error finding container 96eeae435fb413c2328f87e4e1f9db61de4175bd380e177da5396628640d3562: Status 404 returned error can't find the container with id 96eeae435fb413c2328f87e4e1f9db61de4175bd380e177da5396628640d3562 Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.819414 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1e77680-3cef-42db-8595-1aa372cf995b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.819651 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1e77680-3cef-42db-8595-1aa372cf995b-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.819689 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.819820 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.819862 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.819893 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.819923 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5sff\" (UniqueName: 
\"kubernetes.io/projected/d1e77680-3cef-42db-8595-1aa372cf995b-kube-api-access-j5sff\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.819971 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.898035 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bqf9m"] Mar 12 08:24:54 crc kubenswrapper[4809]: W0312 08:24:54.914677 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod547a107f_945b_4d0d_929d_061a9a044077.slice/crio-d80a585ca1064681bf2a761396af7020fdeea9887cd72e1a1cdc7857c17933df WatchSource:0}: Error finding container d80a585ca1064681bf2a761396af7020fdeea9887cd72e1a1cdc7857c17933df: Status 404 returned error can't find the container with id d80a585ca1064681bf2a761396af7020fdeea9887cd72e1a1cdc7857c17933df Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.922415 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1e77680-3cef-42db-8595-1aa372cf995b-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.922499 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.922739 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.922774 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.922801 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.922829 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5sff\" (UniqueName: \"kubernetes.io/projected/d1e77680-3cef-42db-8595-1aa372cf995b-kube-api-access-j5sff\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.922863 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.922910 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1e77680-3cef-42db-8595-1aa372cf995b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.923678 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1e77680-3cef-42db-8595-1aa372cf995b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.924106 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1e77680-3cef-42db-8595-1aa372cf995b-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.931699 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.932154 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.935101 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.935294 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5a7d6d655feca30f79868ff57bcbc2eb87574a4dc9b83f3feb9e7096f859fa1/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.938505 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.956088 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5sff\" (UniqueName: \"kubernetes.io/projected/d1e77680-3cef-42db-8595-1aa372cf995b-kube-api-access-j5sff\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:54 crc kubenswrapper[4809]: I0312 08:24:54.957729 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e77680-3cef-42db-8595-1aa372cf995b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.060820 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.116043 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf6534d4-9b30-4b23-b28c-87ea2179a8ac\") pod \"glance-default-internal-api-0\" (UID: \"d1e77680-3cef-42db-8595-1aa372cf995b\") " pod="openstack/glance-default-internal-api-0" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.144694 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6749cc55-c4e2-4011-bb67-0f1676ba152a" path="/var/lib/kubelet/pods/6749cc55-c4e2-4011-bb67-0f1676ba152a/volumes" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.188042 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-353e-account-create-update-hmg8n"] Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.204450 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bqf9m" event={"ID":"547a107f-945b-4d0d-929d-061a9a044077","Type":"ContainerStarted","Data":"d80a585ca1064681bf2a761396af7020fdeea9887cd72e1a1cdc7857c17933df"} Mar 12 08:24:55 crc kubenswrapper[4809]: W0312 08:24:55.207680 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4043f775_958c_4f5d_828f_dda0811c0a7e.slice/crio-3a3412259294258b579b5d04c981215ce34296bfa34fddce789a447db2ab0800 WatchSource:0}: Error finding container 3a3412259294258b579b5d04c981215ce34296bfa34fddce789a447db2ab0800: Status 404 returned error can't find the container with id 3a3412259294258b579b5d04c981215ce34296bfa34fddce789a447db2ab0800 Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 
08:24:55.208328 4809 generic.go:334] "Generic (PLEG): container finished" podID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerID="a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610" exitCode=0 Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.208373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bcp4" event={"ID":"2d78fd7f-3718-4999-a661-ad590dc808a5","Type":"ContainerDied","Data":"a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610"} Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.208395 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8bcp4" event={"ID":"2d78fd7f-3718-4999-a661-ad590dc808a5","Type":"ContainerDied","Data":"7d41e40b9a666531a3ebf1ec61e9a147c13de70d148d8d03485e7f098977c5d2"} Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.208415 4809 scope.go:117] "RemoveContainer" containerID="a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.208532 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8bcp4" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.228656 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-887k5" event={"ID":"3b066594-e122-4fc4-95bb-c66b48bfd0f3","Type":"ContainerStarted","Data":"96eeae435fb413c2328f87e4e1f9db61de4175bd380e177da5396628640d3562"} Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.235381 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-utilities\") pod \"2d78fd7f-3718-4999-a661-ad590dc808a5\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.235791 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh9zt\" (UniqueName: \"kubernetes.io/projected/2d78fd7f-3718-4999-a661-ad590dc808a5-kube-api-access-vh9zt\") pod \"2d78fd7f-3718-4999-a661-ad590dc808a5\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.235935 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-catalog-content\") pod \"2d78fd7f-3718-4999-a661-ad590dc808a5\" (UID: \"2d78fd7f-3718-4999-a661-ad590dc808a5\") " Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.250956 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-utilities" (OuterVolumeSpecName: "utilities") pod "2d78fd7f-3718-4999-a661-ad590dc808a5" (UID: "2d78fd7f-3718-4999-a661-ad590dc808a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.260476 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-887k5" podStartSLOduration=2.260444398 podStartE2EDuration="2.260444398s" podCreationTimestamp="2026-03-12 08:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:55.250100386 +0000 UTC m=+1568.832136119" watchObservedRunningTime="2026-03-12 08:24:55.260444398 +0000 UTC m=+1568.842480131" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.279466 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d78fd7f-3718-4999-a661-ad590dc808a5-kube-api-access-vh9zt" (OuterVolumeSpecName: "kube-api-access-vh9zt") pod "2d78fd7f-3718-4999-a661-ad590dc808a5" (UID: "2d78fd7f-3718-4999-a661-ad590dc808a5"). InnerVolumeSpecName "kube-api-access-vh9zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.319681 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.328247 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ed56-account-create-update-mdf8p"] Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.339575 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh9zt\" (UniqueName: \"kubernetes.io/projected/2d78fd7f-3718-4999-a661-ad590dc808a5-kube-api-access-vh9zt\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.339626 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.457746 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d78fd7f-3718-4999-a661-ad590dc808a5" (UID: "2d78fd7f-3718-4999-a661-ad590dc808a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.550812 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d78fd7f-3718-4999-a661-ad590dc808a5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.570345 4809 scope.go:117] "RemoveContainer" containerID="fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.586386 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c95a-account-create-update-k247d"] Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.663738 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rr92v"] Mar 12 08:24:55 crc kubenswrapper[4809]: W0312 08:24:55.670703 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3698cbf3_c28c_429f_b568_d8a68b979dfb.slice/crio-bae63e65bd27460568b6b8bac0963ef49100dff9d9321bca3839e99e504d72aa WatchSource:0}: Error finding container bae63e65bd27460568b6b8bac0963ef49100dff9d9321bca3839e99e504d72aa: Status 404 returned error can't find the container with id bae63e65bd27460568b6b8bac0963ef49100dff9d9321bca3839e99e504d72aa Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.722226 4809 scope.go:117] "RemoveContainer" containerID="68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.727206 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8bcp4"] Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.746880 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8bcp4"] Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.906427 4809 scope.go:117] 
"RemoveContainer" containerID="a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610" Mar 12 08:24:55 crc kubenswrapper[4809]: E0312 08:24:55.907301 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610\": container with ID starting with a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610 not found: ID does not exist" containerID="a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.907358 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610"} err="failed to get container status \"a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610\": rpc error: code = NotFound desc = could not find container \"a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610\": container with ID starting with a8d6861d68e11422991828ff49aed83698b07929c3598f11c5badcbf7ebf6610 not found: ID does not exist" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.907386 4809 scope.go:117] "RemoveContainer" containerID="fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31" Mar 12 08:24:55 crc kubenswrapper[4809]: E0312 08:24:55.907958 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31\": container with ID starting with fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31 not found: ID does not exist" containerID="fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.907981 4809 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31"} err="failed to get container status \"fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31\": rpc error: code = NotFound desc = could not find container \"fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31\": container with ID starting with fdfa9153c7267601195f32a3e9028ddb46b0a9fe65503b33b8fb79591d309c31 not found: ID does not exist" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.907996 4809 scope.go:117] "RemoveContainer" containerID="68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff" Mar 12 08:24:55 crc kubenswrapper[4809]: E0312 08:24:55.908492 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff\": container with ID starting with 68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff not found: ID does not exist" containerID="68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff" Mar 12 08:24:55 crc kubenswrapper[4809]: I0312 08:24:55.908519 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff"} err="failed to get container status \"68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff\": rpc error: code = NotFound desc = could not find container \"68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff\": container with ID starting with 68d476243deb09e4b447eeaaae4fdeeb56b84c38379d5fa1fa4cdb7dc90b66ff not found: ID does not exist" Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.083376 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.084437 4809 scope.go:117] "RemoveContainer" 
containerID="596be4b988621d353b5466866266e9e649c932a4ce55fdf02d6f7ec1930974ed" Mar 12 08:24:56 crc kubenswrapper[4809]: E0312 08:24:56.084702 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f489d8994-s29s4_openstack(5e4b8a60-4319-4d91-a9e5-897d76e61f96)\"" pod="openstack/heat-cfnapi-6f489d8994-s29s4" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.225099 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.310335 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rr92v" event={"ID":"3698cbf3-c28c-429f-b568-d8a68b979dfb","Type":"ContainerStarted","Data":"bae63e65bd27460568b6b8bac0963ef49100dff9d9321bca3839e99e504d72aa"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.338676 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-rr92v" podStartSLOduration=3.338648284 podStartE2EDuration="3.338648284s" podCreationTimestamp="2026-03-12 08:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:56.333897134 +0000 UTC m=+1569.915932867" watchObservedRunningTime="2026-03-12 08:24:56.338648284 +0000 UTC m=+1569.920684017" Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.382034 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" event={"ID":"c61faad0-e94e-4480-8ac7-854d1717dc78","Type":"ContainerStarted","Data":"8291d5b0647993172b77bf5ff49ff769f72510cee6072dffdedd08da817cff18"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.382204 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" event={"ID":"c61faad0-e94e-4480-8ac7-854d1717dc78","Type":"ContainerStarted","Data":"90c38b015a0cc473abda6f60c5c0b1295faa725d09a3acaca9c9154f976baab6"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.403559 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-353e-account-create-update-hmg8n" event={"ID":"4043f775-958c-4f5d-828f-dda0811c0a7e","Type":"ContainerStarted","Data":"7519b599ab886240e22563946d82b66fe3199afd3757d33036c754c228eadd92"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.403625 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-353e-account-create-update-hmg8n" event={"ID":"4043f775-958c-4f5d-828f-dda0811c0a7e","Type":"ContainerStarted","Data":"3a3412259294258b579b5d04c981215ce34296bfa34fddce789a447db2ab0800"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.417389 4809 generic.go:334] "Generic (PLEG): container finished" podID="3b066594-e122-4fc4-95bb-c66b48bfd0f3" containerID="86c33c08028baa3b4062ef96f1f1e2ad8effe34c49f8ea3c394ec6630d7acb89" exitCode=0 Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.417461 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-887k5" event={"ID":"3b066594-e122-4fc4-95bb-c66b48bfd0f3","Type":"ContainerDied","Data":"86c33c08028baa3b4062ef96f1f1e2ad8effe34c49f8ea3c394ec6630d7acb89"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.421908 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" podStartSLOduration=3.421878607 podStartE2EDuration="3.421878607s" podCreationTimestamp="2026-03-12 08:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:56.403629991 +0000 UTC m=+1569.985665724" watchObservedRunningTime="2026-03-12 08:24:56.421878607 +0000 UTC 
m=+1570.003914340" Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.422657 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bqf9m" event={"ID":"547a107f-945b-4d0d-929d-061a9a044077","Type":"ContainerStarted","Data":"41aea85cc92f7ccdcc94d0796405333e1ca4469648cab2e1a3a034cd5178dce1"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.438369 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-353e-account-create-update-hmg8n" podStartSLOduration=3.438340775 podStartE2EDuration="3.438340775s" podCreationTimestamp="2026-03-12 08:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:56.433487432 +0000 UTC m=+1570.015523165" watchObservedRunningTime="2026-03-12 08:24:56.438340775 +0000 UTC m=+1570.020376508" Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.446713 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c95a-account-create-update-k247d" event={"ID":"efb90ca2-e6b9-4160-a1be-80e9418f9d43","Type":"ContainerStarted","Data":"28ab8f075b84fcafe9f059f99c244c02f523ec7e33c220de759fc6097d5e4d2f"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.446788 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c95a-account-create-update-k247d" event={"ID":"efb90ca2-e6b9-4160-a1be-80e9418f9d43","Type":"ContainerStarted","Data":"0c9cf859c64f1372072108bc033cbcbe7ed5d7c945e493671a68f2e15653e9b5"} Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.490078 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-bqf9m" podStartSLOduration=3.490047871 podStartE2EDuration="3.490047871s" podCreationTimestamp="2026-03-12 08:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 08:24:56.454630688 +0000 UTC m=+1570.036666421" watchObservedRunningTime="2026-03-12 08:24:56.490047871 +0000 UTC m=+1570.072083604" Mar 12 08:24:56 crc kubenswrapper[4809]: I0312 08:24:56.604282 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c95a-account-create-update-k247d" podStartSLOduration=2.604254567 podStartE2EDuration="2.604254567s" podCreationTimestamp="2026-03-12 08:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:56.503207179 +0000 UTC m=+1570.085242932" watchObservedRunningTime="2026-03-12 08:24:56.604254567 +0000 UTC m=+1570.186290300" Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.117637 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:24:57 crc kubenswrapper[4809]: E0312 08:24:57.117984 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.125834 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" path="/var/lib/kubelet/pods/2d78fd7f-3718-4999-a661-ad590dc808a5/volumes" Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.477294 4809 generic.go:334] "Generic (PLEG): container finished" podID="c61faad0-e94e-4480-8ac7-854d1717dc78" containerID="8291d5b0647993172b77bf5ff49ff769f72510cee6072dffdedd08da817cff18" exitCode=0 Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.477495 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" event={"ID":"c61faad0-e94e-4480-8ac7-854d1717dc78","Type":"ContainerDied","Data":"8291d5b0647993172b77bf5ff49ff769f72510cee6072dffdedd08da817cff18"} Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.481451 4809 generic.go:334] "Generic (PLEG): container finished" podID="4043f775-958c-4f5d-828f-dda0811c0a7e" containerID="7519b599ab886240e22563946d82b66fe3199afd3757d33036c754c228eadd92" exitCode=0 Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.481517 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-353e-account-create-update-hmg8n" event={"ID":"4043f775-958c-4f5d-828f-dda0811c0a7e","Type":"ContainerDied","Data":"7519b599ab886240e22563946d82b66fe3199afd3757d33036c754c228eadd92"} Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.493145 4809 generic.go:334] "Generic (PLEG): container finished" podID="547a107f-945b-4d0d-929d-061a9a044077" containerID="41aea85cc92f7ccdcc94d0796405333e1ca4469648cab2e1a3a034cd5178dce1" exitCode=0 Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.493229 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bqf9m" event={"ID":"547a107f-945b-4d0d-929d-061a9a044077","Type":"ContainerDied","Data":"41aea85cc92f7ccdcc94d0796405333e1ca4469648cab2e1a3a034cd5178dce1"} Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.504625 4809 generic.go:334] "Generic (PLEG): container finished" podID="efb90ca2-e6b9-4160-a1be-80e9418f9d43" containerID="28ab8f075b84fcafe9f059f99c244c02f523ec7e33c220de759fc6097d5e4d2f" exitCode=0 Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.504738 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c95a-account-create-update-k247d" event={"ID":"efb90ca2-e6b9-4160-a1be-80e9418f9d43","Type":"ContainerDied","Data":"28ab8f075b84fcafe9f059f99c244c02f523ec7e33c220de759fc6097d5e4d2f"} Mar 12 
08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.509754 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1e77680-3cef-42db-8595-1aa372cf995b","Type":"ContainerStarted","Data":"128f8a9f0ae668a3609a5c1d6aaf29702d5c739c05b9ebb7aa7a4445339b3733"} Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.509825 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1e77680-3cef-42db-8595-1aa372cf995b","Type":"ContainerStarted","Data":"76666621fad5528cfecf82172b4fd462f46945663975396f1f428704fafd82dc"} Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.519398 4809 generic.go:334] "Generic (PLEG): container finished" podID="3698cbf3-c28c-429f-b568-d8a68b979dfb" containerID="0e60788436b4e057435c814d9f27fba1dc94f4221b9bac8a2b753ea57d926a0c" exitCode=0 Mar 12 08:24:57 crc kubenswrapper[4809]: I0312 08:24:57.519629 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rr92v" event={"ID":"3698cbf3-c28c-429f-b568-d8a68b979dfb","Type":"ContainerDied","Data":"0e60788436b4e057435c814d9f27fba1dc94f4221b9bac8a2b753ea57d926a0c"} Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.104437 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.134847 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b066594-e122-4fc4-95bb-c66b48bfd0f3-operator-scripts\") pod \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\" (UID: \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\") " Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.135067 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdp8l\" (UniqueName: \"kubernetes.io/projected/3b066594-e122-4fc4-95bb-c66b48bfd0f3-kube-api-access-cdp8l\") pod \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\" (UID: \"3b066594-e122-4fc4-95bb-c66b48bfd0f3\") " Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.138071 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b066594-e122-4fc4-95bb-c66b48bfd0f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b066594-e122-4fc4-95bb-c66b48bfd0f3" (UID: "3b066594-e122-4fc4-95bb-c66b48bfd0f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.153492 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b066594-e122-4fc4-95bb-c66b48bfd0f3-kube-api-access-cdp8l" (OuterVolumeSpecName: "kube-api-access-cdp8l") pod "3b066594-e122-4fc4-95bb-c66b48bfd0f3" (UID: "3b066594-e122-4fc4-95bb-c66b48bfd0f3"). InnerVolumeSpecName "kube-api-access-cdp8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.251595 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b066594-e122-4fc4-95bb-c66b48bfd0f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.251656 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdp8l\" (UniqueName: \"kubernetes.io/projected/3b066594-e122-4fc4-95bb-c66b48bfd0f3-kube-api-access-cdp8l\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.548425 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-887k5" event={"ID":"3b066594-e122-4fc4-95bb-c66b48bfd0f3","Type":"ContainerDied","Data":"96eeae435fb413c2328f87e4e1f9db61de4175bd380e177da5396628640d3562"} Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.548769 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96eeae435fb413c2328f87e4e1f9db61de4175bd380e177da5396628640d3562" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.548587 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-887k5" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.572471 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1e77680-3cef-42db-8595-1aa372cf995b","Type":"ContainerStarted","Data":"64bc921035668daf4a313371bc9accf5fd5e4cfbd1c373718787ce64935d6c0a"} Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.619432 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.619408277 podStartE2EDuration="4.619408277s" podCreationTimestamp="2026-03-12 08:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:24:58.603528854 +0000 UTC m=+1572.185564587" watchObservedRunningTime="2026-03-12 08:24:58.619408277 +0000 UTC m=+1572.201444010" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.732816 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:24:58 crc kubenswrapper[4809]: I0312 08:24:58.840783 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f489d8994-s29s4"] Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.246387 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.291282 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb90ca2-e6b9-4160-a1be-80e9418f9d43-operator-scripts\") pod \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\" (UID: \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\") " Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.291359 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvlz5\" (UniqueName: \"kubernetes.io/projected/efb90ca2-e6b9-4160-a1be-80e9418f9d43-kube-api-access-tvlz5\") pod \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\" (UID: \"efb90ca2-e6b9-4160-a1be-80e9418f9d43\") " Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.292108 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efb90ca2-e6b9-4160-a1be-80e9418f9d43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efb90ca2-e6b9-4160-a1be-80e9418f9d43" (UID: "efb90ca2-e6b9-4160-a1be-80e9418f9d43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.330332 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb90ca2-e6b9-4160-a1be-80e9418f9d43-kube-api-access-tvlz5" (OuterVolumeSpecName: "kube-api-access-tvlz5") pod "efb90ca2-e6b9-4160-a1be-80e9418f9d43" (UID: "efb90ca2-e6b9-4160-a1be-80e9418f9d43"). InnerVolumeSpecName "kube-api-access-tvlz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.396504 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb90ca2-e6b9-4160-a1be-80e9418f9d43-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.396547 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvlz5\" (UniqueName: \"kubernetes.io/projected/efb90ca2-e6b9-4160-a1be-80e9418f9d43-kube-api-access-tvlz5\") on node \"crc\" DevicePath \"\"" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.602243 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c95a-account-create-update-k247d" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.602606 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c95a-account-create-update-k247d" event={"ID":"efb90ca2-e6b9-4160-a1be-80e9418f9d43","Type":"ContainerDied","Data":"0c9cf859c64f1372072108bc033cbcbe7ed5d7c945e493671a68f2e15653e9b5"} Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.603750 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c9cf859c64f1372072108bc033cbcbe7ed5d7c945e493671a68f2e15653e9b5" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.705676 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.786259 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-bcd664c56-8zpnk"] Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.892496 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.893415 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:24:59 crc kubenswrapper[4809]: I0312 08:24:59.895446 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.038001 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547a107f-945b-4d0d-929d-061a9a044077-operator-scripts\") pod \"547a107f-945b-4d0d-929d-061a9a044077\" (UID: \"547a107f-945b-4d0d-929d-061a9a044077\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.038242 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nlgf\" (UniqueName: \"kubernetes.io/projected/547a107f-945b-4d0d-929d-061a9a044077-kube-api-access-2nlgf\") pod \"547a107f-945b-4d0d-929d-061a9a044077\" (UID: \"547a107f-945b-4d0d-929d-061a9a044077\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.038348 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v989\" (UniqueName: \"kubernetes.io/projected/c61faad0-e94e-4480-8ac7-854d1717dc78-kube-api-access-5v989\") pod \"c61faad0-e94e-4480-8ac7-854d1717dc78\" (UID: \"c61faad0-e94e-4480-8ac7-854d1717dc78\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.038457 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cfmq\" (UniqueName: \"kubernetes.io/projected/3698cbf3-c28c-429f-b568-d8a68b979dfb-kube-api-access-4cfmq\") pod \"3698cbf3-c28c-429f-b568-d8a68b979dfb\" (UID: \"3698cbf3-c28c-429f-b568-d8a68b979dfb\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.038678 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c61faad0-e94e-4480-8ac7-854d1717dc78-operator-scripts\") pod \"c61faad0-e94e-4480-8ac7-854d1717dc78\" (UID: \"c61faad0-e94e-4480-8ac7-854d1717dc78\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.038764 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3698cbf3-c28c-429f-b568-d8a68b979dfb-operator-scripts\") pod \"3698cbf3-c28c-429f-b568-d8a68b979dfb\" (UID: \"3698cbf3-c28c-429f-b568-d8a68b979dfb\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.039053 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547a107f-945b-4d0d-929d-061a9a044077-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "547a107f-945b-4d0d-929d-061a9a044077" (UID: "547a107f-945b-4d0d-929d-061a9a044077"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.039744 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c61faad0-e94e-4480-8ac7-854d1717dc78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c61faad0-e94e-4480-8ac7-854d1717dc78" (UID: "c61faad0-e94e-4480-8ac7-854d1717dc78"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.039708 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3698cbf3-c28c-429f-b568-d8a68b979dfb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3698cbf3-c28c-429f-b568-d8a68b979dfb" (UID: "3698cbf3-c28c-429f-b568-d8a68b979dfb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.040872 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c61faad0-e94e-4480-8ac7-854d1717dc78-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.041045 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3698cbf3-c28c-429f-b568-d8a68b979dfb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.041105 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547a107f-945b-4d0d-929d-061a9a044077-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.048583 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c61faad0-e94e-4480-8ac7-854d1717dc78-kube-api-access-5v989" (OuterVolumeSpecName: "kube-api-access-5v989") pod "c61faad0-e94e-4480-8ac7-854d1717dc78" (UID: "c61faad0-e94e-4480-8ac7-854d1717dc78"). InnerVolumeSpecName "kube-api-access-5v989". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.048988 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3698cbf3-c28c-429f-b568-d8a68b979dfb-kube-api-access-4cfmq" (OuterVolumeSpecName: "kube-api-access-4cfmq") pod "3698cbf3-c28c-429f-b568-d8a68b979dfb" (UID: "3698cbf3-c28c-429f-b568-d8a68b979dfb"). InnerVolumeSpecName "kube-api-access-4cfmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.055394 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547a107f-945b-4d0d-929d-061a9a044077-kube-api-access-2nlgf" (OuterVolumeSpecName: "kube-api-access-2nlgf") pod "547a107f-945b-4d0d-929d-061a9a044077" (UID: "547a107f-945b-4d0d-929d-061a9a044077"). InnerVolumeSpecName "kube-api-access-2nlgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.143650 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cfmq\" (UniqueName: \"kubernetes.io/projected/3698cbf3-c28c-429f-b568-d8a68b979dfb-kube-api-access-4cfmq\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.143685 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nlgf\" (UniqueName: \"kubernetes.io/projected/547a107f-945b-4d0d-929d-061a9a044077-kube-api-access-2nlgf\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.143700 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v989\" (UniqueName: \"kubernetes.io/projected/c61faad0-e94e-4480-8ac7-854d1717dc78-kube-api-access-5v989\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.160086 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.171201 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.355886 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzpcg\" (UniqueName: \"kubernetes.io/projected/4043f775-958c-4f5d-828f-dda0811c0a7e-kube-api-access-gzpcg\") pod \"4043f775-958c-4f5d-828f-dda0811c0a7e\" (UID: \"4043f775-958c-4f5d-828f-dda0811c0a7e\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.356486 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data\") pod \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.356540 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgd5v\" (UniqueName: \"kubernetes.io/projected/5e4b8a60-4319-4d91-a9e5-897d76e61f96-kube-api-access-tgd5v\") pod \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.356869 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-combined-ca-bundle\") pod \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.356950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data-custom\") pod \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\" (UID: \"5e4b8a60-4319-4d91-a9e5-897d76e61f96\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.357032 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4043f775-958c-4f5d-828f-dda0811c0a7e-operator-scripts\") pod \"4043f775-958c-4f5d-828f-dda0811c0a7e\" (UID: \"4043f775-958c-4f5d-828f-dda0811c0a7e\") " Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.376750 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4043f775-958c-4f5d-828f-dda0811c0a7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4043f775-958c-4f5d-828f-dda0811c0a7e" (UID: "4043f775-958c-4f5d-828f-dda0811c0a7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.383875 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4b8a60-4319-4d91-a9e5-897d76e61f96-kube-api-access-tgd5v" (OuterVolumeSpecName: "kube-api-access-tgd5v") pod "5e4b8a60-4319-4d91-a9e5-897d76e61f96" (UID: "5e4b8a60-4319-4d91-a9e5-897d76e61f96"). InnerVolumeSpecName "kube-api-access-tgd5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.385862 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5e4b8a60-4319-4d91-a9e5-897d76e61f96" (UID: "5e4b8a60-4319-4d91-a9e5-897d76e61f96"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.416734 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4043f775-958c-4f5d-828f-dda0811c0a7e-kube-api-access-gzpcg" (OuterVolumeSpecName: "kube-api-access-gzpcg") pod "4043f775-958c-4f5d-828f-dda0811c0a7e" (UID: "4043f775-958c-4f5d-828f-dda0811c0a7e"). InnerVolumeSpecName "kube-api-access-gzpcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.456107 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e4b8a60-4319-4d91-a9e5-897d76e61f96" (UID: "5e4b8a60-4319-4d91-a9e5-897d76e61f96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.466686 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.466730 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.466741 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4043f775-958c-4f5d-828f-dda0811c0a7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.466777 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzpcg\" (UniqueName: \"kubernetes.io/projected/4043f775-958c-4f5d-828f-dda0811c0a7e-kube-api-access-gzpcg\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.466792 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgd5v\" (UniqueName: \"kubernetes.io/projected/5e4b8a60-4319-4d91-a9e5-897d76e61f96-kube-api-access-tgd5v\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.496406 4809 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data" (OuterVolumeSpecName: "config-data") pod "5e4b8a60-4319-4d91-a9e5-897d76e61f96" (UID: "5e4b8a60-4319-4d91-a9e5-897d76e61f96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.569338 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e4b8a60-4319-4d91-a9e5-897d76e61f96-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.622499 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" event={"ID":"c61faad0-e94e-4480-8ac7-854d1717dc78","Type":"ContainerDied","Data":"90c38b015a0cc473abda6f60c5c0b1295faa725d09a3acaca9c9154f976baab6"} Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.622564 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c38b015a0cc473abda6f60c5c0b1295faa725d09a3acaca9c9154f976baab6" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.622665 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ed56-account-create-update-mdf8p" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.635252 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-bcd664c56-8zpnk" event={"ID":"efdd8218-be37-4288-a09b-4b4119b5ca39","Type":"ContainerDied","Data":"8261c475e03d2d9e2d235c63fb28aac820fd5993d8722c085a75fb130246bf94"} Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.635328 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8261c475e03d2d9e2d235c63fb28aac820fd5993d8722c085a75fb130246bf94" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.639710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-353e-account-create-update-hmg8n" event={"ID":"4043f775-958c-4f5d-828f-dda0811c0a7e","Type":"ContainerDied","Data":"3a3412259294258b579b5d04c981215ce34296bfa34fddce789a447db2ab0800"} Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.639770 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a3412259294258b579b5d04c981215ce34296bfa34fddce789a447db2ab0800" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.640314 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-353e-account-create-update-hmg8n" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.645851 4809 generic.go:334] "Generic (PLEG): container finished" podID="970fd7c0-4095-4aa7-8e61-f300972a7124" containerID="eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e" exitCode=0 Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.645944 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65959dcdbb-g8h24" event={"ID":"970fd7c0-4095-4aa7-8e61-f300972a7124","Type":"ContainerDied","Data":"eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e"} Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.658318 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bqf9m" event={"ID":"547a107f-945b-4d0d-929d-061a9a044077","Type":"ContainerDied","Data":"d80a585ca1064681bf2a761396af7020fdeea9887cd72e1a1cdc7857c17933df"} Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.658369 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80a585ca1064681bf2a761396af7020fdeea9887cd72e1a1cdc7857c17933df" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.658472 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bqf9m" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.677404 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f489d8994-s29s4" event={"ID":"5e4b8a60-4319-4d91-a9e5-897d76e61f96","Type":"ContainerDied","Data":"f877cdd60e648e11b9664dc7142d4e7a8c5bc18c4cf0f7853f2537be669db5ad"} Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.677467 4809 scope.go:117] "RemoveContainer" containerID="596be4b988621d353b5466866266e9e649c932a4ce55fdf02d6f7ec1930974ed" Mar 12 08:25:00 crc kubenswrapper[4809]: I0312 08:25:00.677603 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f489d8994-s29s4" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:00.688051 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rr92v" event={"ID":"3698cbf3-c28c-429f-b568-d8a68b979dfb","Type":"ContainerDied","Data":"bae63e65bd27460568b6b8bac0963ef49100dff9d9321bca3839e99e504d72aa"} Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:00.688450 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bae63e65bd27460568b6b8bac0963ef49100dff9d9321bca3839e99e504d72aa" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:00.688539 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rr92v" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:00.691544 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:00.692733 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:00.751515 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.100250 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.139985 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.144834 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.190460 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-combined-ca-bundle\") pod \"efdd8218-be37-4288-a09b-4b4119b5ca39\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.190638 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd5cd\" (UniqueName: \"kubernetes.io/projected/efdd8218-be37-4288-a09b-4b4119b5ca39-kube-api-access-bd5cd\") pod \"efdd8218-be37-4288-a09b-4b4119b5ca39\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.190729 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data-custom\") pod \"efdd8218-be37-4288-a09b-4b4119b5ca39\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.190866 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data\") pod \"efdd8218-be37-4288-a09b-4b4119b5ca39\" (UID: \"efdd8218-be37-4288-a09b-4b4119b5ca39\") " Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.204851 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd8218-be37-4288-a09b-4b4119b5ca39-kube-api-access-bd5cd" (OuterVolumeSpecName: "kube-api-access-bd5cd") pod "efdd8218-be37-4288-a09b-4b4119b5ca39" 
(UID: "efdd8218-be37-4288-a09b-4b4119b5ca39"). InnerVolumeSpecName "kube-api-access-bd5cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.208100 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "efdd8218-be37-4288-a09b-4b4119b5ca39" (UID: "efdd8218-be37-4288-a09b-4b4119b5ca39"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.231277 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-65959dcdbb-g8h24" Mar 12 08:25:01 crc kubenswrapper[4809]: E0312 08:25:01.241963 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod970fd7c0_4095_4aa7_8e61_f300972a7124.slice/crio-eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e4b8a60_4319_4d91_a9e5_897d76e61f96.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.284164 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efdd8218-be37-4288-a09b-4b4119b5ca39" (UID: "efdd8218-be37-4288-a09b-4b4119b5ca39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.304627 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z2gx\" (UniqueName: \"kubernetes.io/projected/970fd7c0-4095-4aa7-8e61-f300972a7124-kube-api-access-7z2gx\") pod \"970fd7c0-4095-4aa7-8e61-f300972a7124\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.304716 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data-custom\") pod \"970fd7c0-4095-4aa7-8e61-f300972a7124\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.304795 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data\") pod \"970fd7c0-4095-4aa7-8e61-f300972a7124\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.304948 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-combined-ca-bundle\") pod \"970fd7c0-4095-4aa7-8e61-f300972a7124\" (UID: \"970fd7c0-4095-4aa7-8e61-f300972a7124\") " Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.306011 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.306038 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.306051 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd5cd\" (UniqueName: \"kubernetes.io/projected/efdd8218-be37-4288-a09b-4b4119b5ca39-kube-api-access-bd5cd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.310673 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "970fd7c0-4095-4aa7-8e61-f300972a7124" (UID: "970fd7c0-4095-4aa7-8e61-f300972a7124"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.313210 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970fd7c0-4095-4aa7-8e61-f300972a7124-kube-api-access-7z2gx" (OuterVolumeSpecName: "kube-api-access-7z2gx") pod "970fd7c0-4095-4aa7-8e61-f300972a7124" (UID: "970fd7c0-4095-4aa7-8e61-f300972a7124"). InnerVolumeSpecName "kube-api-access-7z2gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.315908 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f489d8994-s29s4"] Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.327379 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6f489d8994-s29s4"] Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.340323 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data" (OuterVolumeSpecName: "config-data") pod "efdd8218-be37-4288-a09b-4b4119b5ca39" (UID: "efdd8218-be37-4288-a09b-4b4119b5ca39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.362148 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "970fd7c0-4095-4aa7-8e61-f300972a7124" (UID: "970fd7c0-4095-4aa7-8e61-f300972a7124"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.411373 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z2gx\" (UniqueName: \"kubernetes.io/projected/970fd7c0-4095-4aa7-8e61-f300972a7124-kube-api-access-7z2gx\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.411409 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.411418 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.411430 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efdd8218-be37-4288-a09b-4b4119b5ca39-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.458189 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data" (OuterVolumeSpecName: "config-data") pod "970fd7c0-4095-4aa7-8e61-f300972a7124" (UID: "970fd7c0-4095-4aa7-8e61-f300972a7124"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.513787 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970fd7c0-4095-4aa7-8e61-f300972a7124-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.706056 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65959dcdbb-g8h24" event={"ID":"970fd7c0-4095-4aa7-8e61-f300972a7124","Type":"ContainerDied","Data":"f3171087752fe2b9c63669c3d53986c7ca3d32fe9f769e2ea4bda6f8d40df289"} Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.706545 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.706566 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.706266 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-65959dcdbb-g8h24" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.706208 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-bcd664c56-8zpnk" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.706639 4809 scope.go:117] "RemoveContainer" containerID="eaa71738ce9ba7a19c458b10f5fc2b4bf86dce7e57dd2894e77ed4731b9c6d6e" Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.784519 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-65959dcdbb-g8h24"] Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.798307 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-65959dcdbb-g8h24"] Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.810943 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-bcd664c56-8zpnk"] Mar 12 08:25:01 crc kubenswrapper[4809]: I0312 08:25:01.822223 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-bcd664c56-8zpnk"] Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.530282 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.595462 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-scripts\") pod \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.595591 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmgtm\" (UniqueName: \"kubernetes.io/projected/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-kube-api-access-vmgtm\") pod \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.595661 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-combined-ca-bundle\") pod \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.595841 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-config-data\") pod \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.595957 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-log-httpd\") pod \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.596010 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-sg-core-conf-yaml\") pod \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.596068 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-run-httpd\") pod \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\" (UID: \"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4\") " Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.597539 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" (UID: "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.607554 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" (UID: "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.633296 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-scripts" (OuterVolumeSpecName: "scripts") pod "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" (UID: "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.699617 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.699649 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.699657 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.715572 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-kube-api-access-vmgtm" (OuterVolumeSpecName: "kube-api-access-vmgtm") pod "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" (UID: "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4"). InnerVolumeSpecName "kube-api-access-vmgtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.792883 4809 generic.go:334] "Generic (PLEG): container finished" podID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerID="33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646" exitCode=0 Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.793323 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.794813 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerDied","Data":"33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646"} Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.794865 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4","Type":"ContainerDied","Data":"a554ae445332c23b64205fac8f1e7d67ca6fba2f16f8bd751719dff0511ef86c"} Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.794888 4809 scope.go:117] "RemoveContainer" containerID="6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.804031 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmgtm\" (UniqueName: \"kubernetes.io/projected/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-kube-api-access-vmgtm\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.833315 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" (UID: "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.848970 4809 scope.go:117] "RemoveContainer" containerID="7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.848987 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" (UID: "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.904629 4809 scope.go:117] "RemoveContainer" containerID="29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.918544 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.918589 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:02 crc kubenswrapper[4809]: I0312 08:25:02.969380 4809 scope.go:117] "RemoveContainer" containerID="33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.016295 4809 scope.go:117] "RemoveContainer" containerID="6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.025314 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a\": container with ID starting with 6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a not found: ID does not exist" containerID="6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.025386 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a"} err="failed to get container status \"6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a\": rpc error: code = NotFound desc = could not find container \"6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a\": container with ID starting with 6a119d07ac6fa4d92287565a9c82a2eecacfb02e80db1efdfb36d0e990757e6a not found: ID does not exist" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.025424 4809 scope.go:117] "RemoveContainer" containerID="7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.028236 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947\": container with ID starting with 7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947 not found: ID does not exist" containerID="7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.028267 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947"} err="failed to get container status \"7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947\": rpc error: code = NotFound desc = could not find container \"7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947\": container with ID 
starting with 7f979c08fe0996be1a08154d1d9ae9ce77c632626f5bcf5ab04dce25d67c3947 not found: ID does not exist" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.028286 4809 scope.go:117] "RemoveContainer" containerID="29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.032958 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439\": container with ID starting with 29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439 not found: ID does not exist" containerID="29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.032989 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439"} err="failed to get container status \"29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439\": rpc error: code = NotFound desc = could not find container \"29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439\": container with ID starting with 29d847e8495617fb01081c72d1b1d6904ae1c763591862874bcadafbca73e439 not found: ID does not exist" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.033012 4809 scope.go:117] "RemoveContainer" containerID="33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.035313 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646\": container with ID starting with 33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646 not found: ID does not exist" containerID="33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646" Mar 12 
08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.035374 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646"} err="failed to get container status \"33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646\": rpc error: code = NotFound desc = could not find container \"33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646\": container with ID starting with 33861322f0ca7ad6764c313fc70dc41e76de3a469237d1921eeb4516e888e646 not found: ID does not exist" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.069298 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-config-data" (OuterVolumeSpecName: "config-data") pod "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" (UID: "ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.142473 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.291959 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" path="/var/lib/kubelet/pods/5e4b8a60-4319-4d91-a9e5-897d76e61f96/volumes" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.292838 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970fd7c0-4095-4aa7-8e61-f300972a7124" path="/var/lib/kubelet/pods/970fd7c0-4095-4aa7-8e61-f300972a7124/volumes" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.293614 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" 
path="/var/lib/kubelet/pods/efdd8218-be37-4288-a09b-4b4119b5ca39/volumes" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332005 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332068 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332090 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332610 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332636 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332654 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b066594-e122-4fc4-95bb-c66b48bfd0f3" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332662 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b066594-e122-4fc4-95bb-c66b48bfd0f3" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332677 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332683 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332702 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332708 4809 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332723 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332734 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332742 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="extract-content" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332748 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="extract-content" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332765 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="ceilometer-notification-agent" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332771 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="ceilometer-notification-agent" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332787 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3698cbf3-c28c-429f-b568-d8a68b979dfb" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332794 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3698cbf3-c28c-429f-b568-d8a68b979dfb" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332807 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="extract-utilities" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332814 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="extract-utilities" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332831 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61faad0-e94e-4480-8ac7-854d1717dc78" containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332837 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61faad0-e94e-4480-8ac7-854d1717dc78" containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332849 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="sg-core" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332856 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="sg-core" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332869 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4043f775-958c-4f5d-828f-dda0811c0a7e" containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332875 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4043f775-958c-4f5d-828f-dda0811c0a7e" containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332886 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb90ca2-e6b9-4160-a1be-80e9418f9d43" containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332892 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb90ca2-e6b9-4160-a1be-80e9418f9d43" containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332905 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="proxy-httpd" Mar 12 08:25:03 
crc kubenswrapper[4809]: I0312 08:25:03.332913 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="proxy-httpd" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332921 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547a107f-945b-4d0d-929d-061a9a044077" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332929 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="547a107f-945b-4d0d-929d-061a9a044077" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332941 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332947 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332960 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970fd7c0-4095-4aa7-8e61-f300972a7124" containerName="heat-engine" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332966 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="970fd7c0-4095-4aa7-8e61-f300972a7124" containerName="heat-engine" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.332977 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="ceilometer-central-agent" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.332984 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="ceilometer-central-agent" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333218 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="970fd7c0-4095-4aa7-8e61-f300972a7124" containerName="heat-engine" Mar 12 
08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333230 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3698cbf3-c28c-429f-b568-d8a68b979dfb" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333241 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333249 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333258 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="sg-core" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333268 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333280 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333290 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d78fd7f-3718-4999-a661-ad590dc808a5" containerName="registry-server" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333301 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="proxy-httpd" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333310 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b066594-e122-4fc4-95bb-c66b48bfd0f3" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333320 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61faad0-e94e-4480-8ac7-854d1717dc78" 
containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333328 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333339 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="efb90ca2-e6b9-4160-a1be-80e9418f9d43" containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333348 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4043f775-958c-4f5d-828f-dda0811c0a7e" containerName="mariadb-account-create-update" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333359 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="547a107f-945b-4d0d-929d-061a9a044077" containerName="mariadb-database-create" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333374 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="ceilometer-central-agent" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333383 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" containerName="ceilometer-notification-agent" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.333794 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333802 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: E0312 08:25:03.333831 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.333838 4809 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="efdd8218-be37-4288-a09b-4b4119b5ca39" containerName="heat-api" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.334104 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4b8a60-4319-4d91-a9e5-897d76e61f96" containerName="heat-cfnapi" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.356197 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.359133 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.361776 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.362224 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.471594 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95qp\" (UniqueName: \"kubernetes.io/projected/af54219e-6d39-440c-b652-b1c9b21588b8-kube-api-access-g95qp\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.472438 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-scripts\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.472983 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.473499 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-run-httpd\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.473830 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.474061 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-config-data\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.475999 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-log-httpd\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.578645 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-scripts\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.578757 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.578820 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-run-httpd\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.578891 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.578920 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-config-data\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.579089 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-log-httpd\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.579157 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95qp\" (UniqueName: \"kubernetes.io/projected/af54219e-6d39-440c-b652-b1c9b21588b8-kube-api-access-g95qp\") pod \"ceilometer-0\" (UID: 
\"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.580767 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-log-httpd\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.581335 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-run-httpd\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.597670 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-scripts\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.606228 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.620101 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.620309 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95qp\" (UniqueName: 
\"kubernetes.io/projected/af54219e-6d39-440c-b652-b1c9b21588b8-kube-api-access-g95qp\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.629536 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-config-data\") pod \"ceilometer-0\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " pod="openstack/ceilometer-0" Mar 12 08:25:03 crc kubenswrapper[4809]: I0312 08:25:03.706893 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.458343 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.504292 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbtp5"] Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.549680 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbtp5"] Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.549835 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.555181 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zlh6z" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.555265 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.557527 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.662338 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.662938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhh8\" (UniqueName: \"kubernetes.io/projected/7ff0ea45-0824-477e-9dde-a71bca537168-kube-api-access-gnhh8\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.662983 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-config-data\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.663185 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-scripts\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.767085 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-scripts\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.767183 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.767238 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhh8\" (UniqueName: \"kubernetes.io/projected/7ff0ea45-0824-477e-9dde-a71bca537168-kube-api-access-gnhh8\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.767264 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-config-data\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.780320 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.796527 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-scripts\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.799280 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-config-data\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.803821 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhh8\" (UniqueName: \"kubernetes.io/projected/7ff0ea45-0824-477e-9dde-a71bca537168-kube-api-access-gnhh8\") pod \"nova-cell0-conductor-db-sync-vbtp5\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.914341 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:04 crc kubenswrapper[4809]: I0312 08:25:04.984429 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerStarted","Data":"9bde68a66f74fb1cb498ea0ae17a965a6c4e59e11cfa39686a2795fc5fafb8df"} Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.164541 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4" path="/var/lib/kubelet/pods/ad0c288d-ad03-4b1e-bb80-fb3a17bb43b4/volumes" Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.321506 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.327367 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.412840 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.468703 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.725880 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbtp5"] Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.790953 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.791146 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 08:25:05 crc kubenswrapper[4809]: I0312 08:25:05.804855 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Mar 12 08:25:06 crc kubenswrapper[4809]: I0312 08:25:06.008153 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" event={"ID":"7ff0ea45-0824-477e-9dde-a71bca537168","Type":"ContainerStarted","Data":"eb2ea2651471d18ac9fbe14f629dc510d9d1fae6e3cdf236b68540d89ad4c25c"} Mar 12 08:25:06 crc kubenswrapper[4809]: I0312 08:25:06.012714 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerStarted","Data":"a874a15901146c96b8af8cf44e4ee0171cb7f2f1416fb9537e4d69d8bc61a78b"} Mar 12 08:25:06 crc kubenswrapper[4809]: I0312 08:25:06.013339 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 08:25:06 crc kubenswrapper[4809]: I0312 08:25:06.013941 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 08:25:07 crc kubenswrapper[4809]: I0312 08:25:07.047679 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerStarted","Data":"551c0ff174feb8d2b277e430ba590567b30d95f1d4cbe77b316da86a5bfb016a"} Mar 12 08:25:07 crc kubenswrapper[4809]: I0312 08:25:07.047754 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerStarted","Data":"bc1696149afa3efa4c2c7614a3ab764ee68a721bf0538cd612938a8385627ba1"} Mar 12 08:25:09 crc kubenswrapper[4809]: I0312 08:25:09.154317 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerStarted","Data":"b832cb2acb463b61a2457f91dc10003a0197fa782dfaca3d5f4799194bdf73ce"} Mar 12 08:25:09 crc kubenswrapper[4809]: I0312 08:25:09.155219 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:25:09 crc kubenswrapper[4809]: I0312 08:25:09.176144 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.0571445329999998 podStartE2EDuration="6.176094331s" podCreationTimestamp="2026-03-12 08:25:03 +0000 UTC" firstStartedPulling="2026-03-12 08:25:04.450890074 +0000 UTC m=+1578.032925807" lastFinishedPulling="2026-03-12 08:25:08.569839872 +0000 UTC m=+1582.151875605" observedRunningTime="2026-03-12 08:25:09.160157918 +0000 UTC m=+1582.742193671" watchObservedRunningTime="2026-03-12 08:25:09.176094331 +0000 UTC m=+1582.758130064" Mar 12 08:25:09 crc kubenswrapper[4809]: I0312 08:25:09.208905 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 08:25:09 crc kubenswrapper[4809]: I0312 08:25:09.209545 4809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 08:25:09 crc kubenswrapper[4809]: I0312 08:25:09.214878 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 08:25:11 crc kubenswrapper[4809]: I0312 08:25:11.106998 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:25:11 crc kubenswrapper[4809]: E0312 08:25:11.107738 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:25:11 crc kubenswrapper[4809]: I0312 08:25:11.425565 4809 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:11 crc kubenswrapper[4809]: I0312 08:25:11.425997 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="ceilometer-central-agent" containerID="cri-o://a874a15901146c96b8af8cf44e4ee0171cb7f2f1416fb9537e4d69d8bc61a78b" gracePeriod=30 Mar 12 08:25:11 crc kubenswrapper[4809]: I0312 08:25:11.426334 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="proxy-httpd" containerID="cri-o://b832cb2acb463b61a2457f91dc10003a0197fa782dfaca3d5f4799194bdf73ce" gracePeriod=30 Mar 12 08:25:11 crc kubenswrapper[4809]: I0312 08:25:11.426540 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="sg-core" containerID="cri-o://551c0ff174feb8d2b277e430ba590567b30d95f1d4cbe77b316da86a5bfb016a" gracePeriod=30 Mar 12 08:25:11 crc kubenswrapper[4809]: I0312 08:25:11.426685 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="ceilometer-notification-agent" containerID="cri-o://bc1696149afa3efa4c2c7614a3ab764ee68a721bf0538cd612938a8385627ba1" gracePeriod=30 Mar 12 08:25:12 crc kubenswrapper[4809]: I0312 08:25:12.187948 4809 generic.go:334] "Generic (PLEG): container finished" podID="af54219e-6d39-440c-b652-b1c9b21588b8" containerID="b832cb2acb463b61a2457f91dc10003a0197fa782dfaca3d5f4799194bdf73ce" exitCode=0 Mar 12 08:25:12 crc kubenswrapper[4809]: I0312 08:25:12.188613 4809 generic.go:334] "Generic (PLEG): container finished" podID="af54219e-6d39-440c-b652-b1c9b21588b8" containerID="551c0ff174feb8d2b277e430ba590567b30d95f1d4cbe77b316da86a5bfb016a" exitCode=2 Mar 12 08:25:12 crc kubenswrapper[4809]: 
I0312 08:25:12.188627 4809 generic.go:334] "Generic (PLEG): container finished" podID="af54219e-6d39-440c-b652-b1c9b21588b8" containerID="bc1696149afa3efa4c2c7614a3ab764ee68a721bf0538cd612938a8385627ba1" exitCode=0 Mar 12 08:25:12 crc kubenswrapper[4809]: I0312 08:25:12.188013 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerDied","Data":"b832cb2acb463b61a2457f91dc10003a0197fa782dfaca3d5f4799194bdf73ce"} Mar 12 08:25:12 crc kubenswrapper[4809]: I0312 08:25:12.188739 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerDied","Data":"551c0ff174feb8d2b277e430ba590567b30d95f1d4cbe77b316da86a5bfb016a"} Mar 12 08:25:12 crc kubenswrapper[4809]: I0312 08:25:12.188781 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerDied","Data":"bc1696149afa3efa4c2c7614a3ab764ee68a721bf0538cd612938a8385627ba1"} Mar 12 08:25:18 crc kubenswrapper[4809]: I0312 08:25:18.290758 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" event={"ID":"7ff0ea45-0824-477e-9dde-a71bca537168","Type":"ContainerStarted","Data":"06827f215a5beefb6b2dc60783316a20e387ab9c23e3d80e5ba1763b2e194afe"} Mar 12 08:25:18 crc kubenswrapper[4809]: I0312 08:25:18.325830 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" podStartSLOduration=2.216056683 podStartE2EDuration="14.325802398s" podCreationTimestamp="2026-03-12 08:25:04 +0000 UTC" firstStartedPulling="2026-03-12 08:25:05.761832039 +0000 UTC m=+1579.343867772" lastFinishedPulling="2026-03-12 08:25:17.871577744 +0000 UTC m=+1591.453613487" observedRunningTime="2026-03-12 08:25:18.310661066 +0000 UTC m=+1591.892696799" 
watchObservedRunningTime="2026-03-12 08:25:18.325802398 +0000 UTC m=+1591.907838131" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.182005 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cmv6w"] Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.185722 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.206442 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmv6w"] Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.310847 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-utilities\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.310993 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzrm\" (UniqueName: \"kubernetes.io/projected/81403e46-3215-4867-a857-ec7bc0b08c0d-kube-api-access-4pzrm\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.311630 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-catalog-content\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.319206 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="af54219e-6d39-440c-b652-b1c9b21588b8" containerID="a874a15901146c96b8af8cf44e4ee0171cb7f2f1416fb9537e4d69d8bc61a78b" exitCode=0 Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.319257 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerDied","Data":"a874a15901146c96b8af8cf44e4ee0171cb7f2f1416fb9537e4d69d8bc61a78b"} Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.319291 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af54219e-6d39-440c-b652-b1c9b21588b8","Type":"ContainerDied","Data":"9bde68a66f74fb1cb498ea0ae17a965a6c4e59e11cfa39686a2795fc5fafb8df"} Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.319304 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bde68a66f74fb1cb498ea0ae17a965a6c4e59e11cfa39686a2795fc5fafb8df" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.343350 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.413974 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-config-data\") pod \"af54219e-6d39-440c-b652-b1c9b21588b8\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.414093 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-sg-core-conf-yaml\") pod \"af54219e-6d39-440c-b652-b1c9b21588b8\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.414141 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-scripts\") pod \"af54219e-6d39-440c-b652-b1c9b21588b8\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.414171 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95qp\" (UniqueName: \"kubernetes.io/projected/af54219e-6d39-440c-b652-b1c9b21588b8-kube-api-access-g95qp\") pod \"af54219e-6d39-440c-b652-b1c9b21588b8\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.414371 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-log-httpd\") pod \"af54219e-6d39-440c-b652-b1c9b21588b8\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.414498 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-run-httpd\") pod \"af54219e-6d39-440c-b652-b1c9b21588b8\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.414545 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-combined-ca-bundle\") pod \"af54219e-6d39-440c-b652-b1c9b21588b8\" (UID: \"af54219e-6d39-440c-b652-b1c9b21588b8\") " Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.415220 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "af54219e-6d39-440c-b652-b1c9b21588b8" (UID: "af54219e-6d39-440c-b652-b1c9b21588b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.415712 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "af54219e-6d39-440c-b652-b1c9b21588b8" (UID: "af54219e-6d39-440c-b652-b1c9b21588b8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.416026 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-utilities\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.416650 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-utilities\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.416688 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzrm\" (UniqueName: \"kubernetes.io/projected/81403e46-3215-4867-a857-ec7bc0b08c0d-kube-api-access-4pzrm\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.417013 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-catalog-content\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.417382 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.417411 4809 reconciler_common.go:293] "Volume detached for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af54219e-6d39-440c-b652-b1c9b21588b8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.418107 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-catalog-content\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.422318 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af54219e-6d39-440c-b652-b1c9b21588b8-kube-api-access-g95qp" (OuterVolumeSpecName: "kube-api-access-g95qp") pod "af54219e-6d39-440c-b652-b1c9b21588b8" (UID: "af54219e-6d39-440c-b652-b1c9b21588b8"). InnerVolumeSpecName "kube-api-access-g95qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.439878 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzrm\" (UniqueName: \"kubernetes.io/projected/81403e46-3215-4867-a857-ec7bc0b08c0d-kube-api-access-4pzrm\") pod \"redhat-marketplace-cmv6w\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.440471 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-scripts" (OuterVolumeSpecName: "scripts") pod "af54219e-6d39-440c-b652-b1c9b21588b8" (UID: "af54219e-6d39-440c-b652-b1c9b21588b8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.467834 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "af54219e-6d39-440c-b652-b1c9b21588b8" (UID: "af54219e-6d39-440c-b652-b1c9b21588b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.519470 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.519504 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.519513 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g95qp\" (UniqueName: \"kubernetes.io/projected/af54219e-6d39-440c-b652-b1c9b21588b8-kube-api-access-g95qp\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.566372 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af54219e-6d39-440c-b652-b1c9b21588b8" (UID: "af54219e-6d39-440c-b652-b1c9b21588b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.585179 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-config-data" (OuterVolumeSpecName: "config-data") pod "af54219e-6d39-440c-b652-b1c9b21588b8" (UID: "af54219e-6d39-440c-b652-b1c9b21588b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.622081 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.622617 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af54219e-6d39-440c-b652-b1c9b21588b8-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:20 crc kubenswrapper[4809]: I0312 08:25:20.634801 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.329367 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.368282 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.387928 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.409067 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmv6w"] Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.452162 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:21 crc kubenswrapper[4809]: E0312 08:25:21.452719 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="ceilometer-central-agent" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.452739 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="ceilometer-central-agent" Mar 12 08:25:21 crc kubenswrapper[4809]: E0312 08:25:21.452753 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="sg-core" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.452760 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="sg-core" Mar 12 08:25:21 crc kubenswrapper[4809]: E0312 08:25:21.452774 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="ceilometer-notification-agent" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.452780 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="ceilometer-notification-agent" Mar 12 08:25:21 crc kubenswrapper[4809]: E0312 08:25:21.452807 4809 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="proxy-httpd" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.452813 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="proxy-httpd" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.453034 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="sg-core" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.453058 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="ceilometer-notification-agent" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.453075 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="proxy-httpd" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.453087 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" containerName="ceilometer-central-agent" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.460390 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.464653 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-log-httpd\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.464698 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-run-httpd\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.464795 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-config-data\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.464846 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.464879 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8pzs\" (UniqueName: \"kubernetes.io/projected/38e51b68-51b0-4e3b-928f-9f367ffc672a-kube-api-access-p8pzs\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.464939 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.464970 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-scripts\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.465602 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.467506 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.501616 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.566971 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-config-data\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.567064 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.567109 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-p8pzs\" (UniqueName: \"kubernetes.io/projected/38e51b68-51b0-4e3b-928f-9f367ffc672a-kube-api-access-p8pzs\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.567213 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.567247 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-scripts\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.567395 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-log-httpd\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.567423 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-run-httpd\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.568227 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-log-httpd\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc 
kubenswrapper[4809]: I0312 08:25:21.568525 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-run-httpd\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.576304 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.582742 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.588335 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8pzs\" (UniqueName: \"kubernetes.io/projected/38e51b68-51b0-4e3b-928f-9f367ffc672a-kube-api-access-p8pzs\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.593662 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-config-data\") pod \"ceilometer-0\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.614092 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-scripts\") pod \"ceilometer-0\" (UID: 
\"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " pod="openstack/ceilometer-0" Mar 12 08:25:21 crc kubenswrapper[4809]: I0312 08:25:21.806434 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:22 crc kubenswrapper[4809]: W0312 08:25:22.320034 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38e51b68_51b0_4e3b_928f_9f367ffc672a.slice/crio-c1d7947176f277fc0c28f9b13bbe87eb02c38e54733533522719d6073b150f74 WatchSource:0}: Error finding container c1d7947176f277fc0c28f9b13bbe87eb02c38e54733533522719d6073b150f74: Status 404 returned error can't find the container with id c1d7947176f277fc0c28f9b13bbe87eb02c38e54733533522719d6073b150f74 Mar 12 08:25:22 crc kubenswrapper[4809]: I0312 08:25:22.327047 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:22 crc kubenswrapper[4809]: I0312 08:25:22.348628 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerStarted","Data":"c1d7947176f277fc0c28f9b13bbe87eb02c38e54733533522719d6073b150f74"} Mar 12 08:25:22 crc kubenswrapper[4809]: I0312 08:25:22.355973 4809 generic.go:334] "Generic (PLEG): container finished" podID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerID="15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9" exitCode=0 Mar 12 08:25:22 crc kubenswrapper[4809]: I0312 08:25:22.356044 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmv6w" event={"ID":"81403e46-3215-4867-a857-ec7bc0b08c0d","Type":"ContainerDied","Data":"15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9"} Mar 12 08:25:22 crc kubenswrapper[4809]: I0312 08:25:22.356086 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmv6w" 
event={"ID":"81403e46-3215-4867-a857-ec7bc0b08c0d","Type":"ContainerStarted","Data":"409872c90e973a3c164613e8dd0c7f7821c9ccdacd7e2565e3d34ce14c58f0b7"} Mar 12 08:25:23 crc kubenswrapper[4809]: I0312 08:25:23.130521 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af54219e-6d39-440c-b652-b1c9b21588b8" path="/var/lib/kubelet/pods/af54219e-6d39-440c-b652-b1c9b21588b8/volumes" Mar 12 08:25:23 crc kubenswrapper[4809]: I0312 08:25:23.373789 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmv6w" event={"ID":"81403e46-3215-4867-a857-ec7bc0b08c0d","Type":"ContainerStarted","Data":"96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c"} Mar 12 08:25:23 crc kubenswrapper[4809]: I0312 08:25:23.376932 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerStarted","Data":"eebcc444e21547a19780d297a3b1c0c9aa13b02b4195f85903b9a0afe1b7cd97"} Mar 12 08:25:24 crc kubenswrapper[4809]: I0312 08:25:24.395420 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerStarted","Data":"d565fda23b679cc42023b299e7d3d51ca19aa0153fcb5fd8012f0e4ee9cbedaa"} Mar 12 08:25:25 crc kubenswrapper[4809]: I0312 08:25:25.413821 4809 generic.go:334] "Generic (PLEG): container finished" podID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerID="96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c" exitCode=0 Mar 12 08:25:25 crc kubenswrapper[4809]: I0312 08:25:25.413918 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmv6w" event={"ID":"81403e46-3215-4867-a857-ec7bc0b08c0d","Type":"ContainerDied","Data":"96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c"} Mar 12 08:25:25 crc kubenswrapper[4809]: I0312 08:25:25.417819 4809 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerStarted","Data":"a84f72f142d9ee5bfb4688de9e30ecb6c83045e6fe740ee7a5658f9f5e9909c2"} Mar 12 08:25:26 crc kubenswrapper[4809]: I0312 08:25:26.107151 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:25:26 crc kubenswrapper[4809]: E0312 08:25:26.108135 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:25:26 crc kubenswrapper[4809]: I0312 08:25:26.435577 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmv6w" event={"ID":"81403e46-3215-4867-a857-ec7bc0b08c0d","Type":"ContainerStarted","Data":"13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36"} Mar 12 08:25:26 crc kubenswrapper[4809]: I0312 08:25:26.469314 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cmv6w" podStartSLOduration=3.011622854 podStartE2EDuration="6.469282187s" podCreationTimestamp="2026-03-12 08:25:20 +0000 UTC" firstStartedPulling="2026-03-12 08:25:22.358631134 +0000 UTC m=+1595.940666867" lastFinishedPulling="2026-03-12 08:25:25.816290467 +0000 UTC m=+1599.398326200" observedRunningTime="2026-03-12 08:25:26.455937355 +0000 UTC m=+1600.037973088" watchObservedRunningTime="2026-03-12 08:25:26.469282187 +0000 UTC m=+1600.051317920" Mar 12 08:25:27 crc kubenswrapper[4809]: I0312 08:25:27.450385 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerStarted","Data":"26caba6f6b0b93cd4fdda850f4500a1db0042d8a1fafd70266705bca021b0d29"} Mar 12 08:25:27 crc kubenswrapper[4809]: I0312 08:25:27.451072 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:25:27 crc kubenswrapper[4809]: I0312 08:25:27.479197 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.230293734 podStartE2EDuration="6.479174096s" podCreationTimestamp="2026-03-12 08:25:21 +0000 UTC" firstStartedPulling="2026-03-12 08:25:22.323863829 +0000 UTC m=+1595.905899562" lastFinishedPulling="2026-03-12 08:25:26.572744191 +0000 UTC m=+1600.154779924" observedRunningTime="2026-03-12 08:25:27.477089048 +0000 UTC m=+1601.059124791" watchObservedRunningTime="2026-03-12 08:25:27.479174096 +0000 UTC m=+1601.061209829" Mar 12 08:25:30 crc kubenswrapper[4809]: I0312 08:25:30.635065 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:30 crc kubenswrapper[4809]: I0312 08:25:30.637481 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:31 crc kubenswrapper[4809]: I0312 08:25:31.692004 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cmv6w" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="registry-server" probeResult="failure" output=< Mar 12 08:25:31 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:25:31 crc kubenswrapper[4809]: > Mar 12 08:25:32 crc kubenswrapper[4809]: I0312 08:25:32.513080 4809 generic.go:334] "Generic (PLEG): container finished" podID="7ff0ea45-0824-477e-9dde-a71bca537168" containerID="06827f215a5beefb6b2dc60783316a20e387ab9c23e3d80e5ba1763b2e194afe" exitCode=0 Mar 12 08:25:32 crc 
kubenswrapper[4809]: I0312 08:25:32.513544 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" event={"ID":"7ff0ea45-0824-477e-9dde-a71bca537168","Type":"ContainerDied","Data":"06827f215a5beefb6b2dc60783316a20e387ab9c23e3d80e5ba1763b2e194afe"} Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.082965 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.162304 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-scripts\") pod \"7ff0ea45-0824-477e-9dde-a71bca537168\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.162602 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-combined-ca-bundle\") pod \"7ff0ea45-0824-477e-9dde-a71bca537168\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.162711 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-config-data\") pod \"7ff0ea45-0824-477e-9dde-a71bca537168\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.162793 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnhh8\" (UniqueName: \"kubernetes.io/projected/7ff0ea45-0824-477e-9dde-a71bca537168-kube-api-access-gnhh8\") pod \"7ff0ea45-0824-477e-9dde-a71bca537168\" (UID: \"7ff0ea45-0824-477e-9dde-a71bca537168\") " Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.174393 4809 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-scripts" (OuterVolumeSpecName: "scripts") pod "7ff0ea45-0824-477e-9dde-a71bca537168" (UID: "7ff0ea45-0824-477e-9dde-a71bca537168"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.196947 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff0ea45-0824-477e-9dde-a71bca537168-kube-api-access-gnhh8" (OuterVolumeSpecName: "kube-api-access-gnhh8") pod "7ff0ea45-0824-477e-9dde-a71bca537168" (UID: "7ff0ea45-0824-477e-9dde-a71bca537168"). InnerVolumeSpecName "kube-api-access-gnhh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.204175 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-config-data" (OuterVolumeSpecName: "config-data") pod "7ff0ea45-0824-477e-9dde-a71bca537168" (UID: "7ff0ea45-0824-477e-9dde-a71bca537168"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.215974 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ff0ea45-0824-477e-9dde-a71bca537168" (UID: "7ff0ea45-0824-477e-9dde-a71bca537168"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.266677 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.266716 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.266730 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff0ea45-0824-477e-9dde-a71bca537168-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.266742 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnhh8\" (UniqueName: \"kubernetes.io/projected/7ff0ea45-0824-477e-9dde-a71bca537168-kube-api-access-gnhh8\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.486437 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.486849 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="ceilometer-central-agent" containerID="cri-o://eebcc444e21547a19780d297a3b1c0c9aa13b02b4195f85903b9a0afe1b7cd97" gracePeriod=30 Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.486938 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="proxy-httpd" containerID="cri-o://26caba6f6b0b93cd4fdda850f4500a1db0042d8a1fafd70266705bca021b0d29" gracePeriod=30 Mar 12 08:25:34 crc 
kubenswrapper[4809]: I0312 08:25:34.487046 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="ceilometer-notification-agent" containerID="cri-o://d565fda23b679cc42023b299e7d3d51ca19aa0153fcb5fd8012f0e4ee9cbedaa" gracePeriod=30 Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.486994 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="sg-core" containerID="cri-o://a84f72f142d9ee5bfb4688de9e30ecb6c83045e6fe740ee7a5658f9f5e9909c2" gracePeriod=30 Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.542621 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" event={"ID":"7ff0ea45-0824-477e-9dde-a71bca537168","Type":"ContainerDied","Data":"eb2ea2651471d18ac9fbe14f629dc510d9d1fae6e3cdf236b68540d89ad4c25c"} Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.542673 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb2ea2651471d18ac9fbe14f629dc510d9d1fae6e3cdf236b68540d89ad4c25c" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.542735 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vbtp5" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.669229 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 08:25:34 crc kubenswrapper[4809]: E0312 08:25:34.671218 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff0ea45-0824-477e-9dde-a71bca537168" containerName="nova-cell0-conductor-db-sync" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.671311 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff0ea45-0824-477e-9dde-a71bca537168" containerName="nova-cell0-conductor-db-sync" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.671848 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff0ea45-0824-477e-9dde-a71bca537168" containerName="nova-cell0-conductor-db-sync" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.673741 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.677830 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zlh6z" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.678134 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.686068 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.778933 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb0f97f-8243-4466-90df-a657b0bd5ceb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: 
I0312 08:25:34.779074 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4km\" (UniqueName: \"kubernetes.io/projected/5eb0f97f-8243-4466-90df-a657b0bd5ceb-kube-api-access-jh4km\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.779095 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb0f97f-8243-4466-90df-a657b0bd5ceb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.881895 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh4km\" (UniqueName: \"kubernetes.io/projected/5eb0f97f-8243-4466-90df-a657b0bd5ceb-kube-api-access-jh4km\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.881967 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb0f97f-8243-4466-90df-a657b0bd5ceb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.882232 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb0f97f-8243-4466-90df-a657b0bd5ceb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.888263 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb0f97f-8243-4466-90df-a657b0bd5ceb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.888344 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb0f97f-8243-4466-90df-a657b0bd5ceb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:34 crc kubenswrapper[4809]: I0312 08:25:34.906368 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh4km\" (UniqueName: \"kubernetes.io/projected/5eb0f97f-8243-4466-90df-a657b0bd5ceb-kube-api-access-jh4km\") pod \"nova-cell0-conductor-0\" (UID: \"5eb0f97f-8243-4466-90df-a657b0bd5ceb\") " pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.001622 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.565245 4809 generic.go:334] "Generic (PLEG): container finished" podID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerID="26caba6f6b0b93cd4fdda850f4500a1db0042d8a1fafd70266705bca021b0d29" exitCode=0 Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.565593 4809 generic.go:334] "Generic (PLEG): container finished" podID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerID="a84f72f142d9ee5bfb4688de9e30ecb6c83045e6fe740ee7a5658f9f5e9909c2" exitCode=2 Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.565603 4809 generic.go:334] "Generic (PLEG): container finished" podID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerID="d565fda23b679cc42023b299e7d3d51ca19aa0153fcb5fd8012f0e4ee9cbedaa" exitCode=0 Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.565611 4809 generic.go:334] "Generic (PLEG): container finished" podID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerID="eebcc444e21547a19780d297a3b1c0c9aa13b02b4195f85903b9a0afe1b7cd97" exitCode=0 Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.565340 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerDied","Data":"26caba6f6b0b93cd4fdda850f4500a1db0042d8a1fafd70266705bca021b0d29"} Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.565657 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerDied","Data":"a84f72f142d9ee5bfb4688de9e30ecb6c83045e6fe740ee7a5658f9f5e9909c2"} Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.565674 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerDied","Data":"d565fda23b679cc42023b299e7d3d51ca19aa0153fcb5fd8012f0e4ee9cbedaa"} Mar 12 08:25:35 crc 
kubenswrapper[4809]: I0312 08:25:35.565686 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerDied","Data":"eebcc444e21547a19780d297a3b1c0c9aa13b02b4195f85903b9a0afe1b7cd97"} Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.607517 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.697368 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.819639 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-sg-core-conf-yaml\") pod \"38e51b68-51b0-4e3b-928f-9f367ffc672a\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.820318 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8pzs\" (UniqueName: \"kubernetes.io/projected/38e51b68-51b0-4e3b-928f-9f367ffc672a-kube-api-access-p8pzs\") pod \"38e51b68-51b0-4e3b-928f-9f367ffc672a\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.820384 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-run-httpd\") pod \"38e51b68-51b0-4e3b-928f-9f367ffc672a\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.820461 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-log-httpd\") pod \"38e51b68-51b0-4e3b-928f-9f367ffc672a\" (UID: 
\"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.820524 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-scripts\") pod \"38e51b68-51b0-4e3b-928f-9f367ffc672a\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.820709 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-config-data\") pod \"38e51b68-51b0-4e3b-928f-9f367ffc672a\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.820751 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-combined-ca-bundle\") pod \"38e51b68-51b0-4e3b-928f-9f367ffc672a\" (UID: \"38e51b68-51b0-4e3b-928f-9f367ffc672a\") " Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.821425 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "38e51b68-51b0-4e3b-928f-9f367ffc672a" (UID: "38e51b68-51b0-4e3b-928f-9f367ffc672a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.821623 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "38e51b68-51b0-4e3b-928f-9f367ffc672a" (UID: "38e51b68-51b0-4e3b-928f-9f367ffc672a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.824340 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.824364 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38e51b68-51b0-4e3b-928f-9f367ffc672a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.825430 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-scripts" (OuterVolumeSpecName: "scripts") pod "38e51b68-51b0-4e3b-928f-9f367ffc672a" (UID: "38e51b68-51b0-4e3b-928f-9f367ffc672a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.828364 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e51b68-51b0-4e3b-928f-9f367ffc672a-kube-api-access-p8pzs" (OuterVolumeSpecName: "kube-api-access-p8pzs") pod "38e51b68-51b0-4e3b-928f-9f367ffc672a" (UID: "38e51b68-51b0-4e3b-928f-9f367ffc672a"). InnerVolumeSpecName "kube-api-access-p8pzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.859515 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "38e51b68-51b0-4e3b-928f-9f367ffc672a" (UID: "38e51b68-51b0-4e3b-928f-9f367ffc672a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.920227 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38e51b68-51b0-4e3b-928f-9f367ffc672a" (UID: "38e51b68-51b0-4e3b-928f-9f367ffc672a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.927330 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.927373 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.927390 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.927400 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8pzs\" (UniqueName: \"kubernetes.io/projected/38e51b68-51b0-4e3b-928f-9f367ffc672a-kube-api-access-p8pzs\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:35 crc kubenswrapper[4809]: I0312 08:25:35.977620 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-config-data" (OuterVolumeSpecName: "config-data") pod "38e51b68-51b0-4e3b-928f-9f367ffc672a" (UID: "38e51b68-51b0-4e3b-928f-9f367ffc672a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.056190 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e51b68-51b0-4e3b-928f-9f367ffc672a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.578549 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5eb0f97f-8243-4466-90df-a657b0bd5ceb","Type":"ContainerStarted","Data":"d5f84d4232937b62222898fde654aea5d09f91b0316c2265cb34b5c8a32fac8e"} Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.579671 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.579827 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5eb0f97f-8243-4466-90df-a657b0bd5ceb","Type":"ContainerStarted","Data":"270337e1b0473f182b6927a12456e5c0e4642d3a29dc049a4c89070b13e988ae"} Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.583316 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38e51b68-51b0-4e3b-928f-9f367ffc672a","Type":"ContainerDied","Data":"c1d7947176f277fc0c28f9b13bbe87eb02c38e54733533522719d6073b150f74"} Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.583422 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.583511 4809 scope.go:117] "RemoveContainer" containerID="26caba6f6b0b93cd4fdda850f4500a1db0042d8a1fafd70266705bca021b0d29" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.624050 4809 scope.go:117] "RemoveContainer" containerID="a84f72f142d9ee5bfb4688de9e30ecb6c83045e6fe740ee7a5658f9f5e9909c2" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.646471 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.64644862 podStartE2EDuration="2.64644862s" podCreationTimestamp="2026-03-12 08:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:25:36.614706266 +0000 UTC m=+1610.196741999" watchObservedRunningTime="2026-03-12 08:25:36.64644862 +0000 UTC m=+1610.228484353" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.650327 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.652535 4809 scope.go:117] "RemoveContainer" containerID="d565fda23b679cc42023b299e7d3d51ca19aa0153fcb5fd8012f0e4ee9cbedaa" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.691294 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.704681 4809 scope.go:117] "RemoveContainer" containerID="eebcc444e21547a19780d297a3b1c0c9aa13b02b4195f85903b9a0afe1b7cd97" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.732346 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:36 crc kubenswrapper[4809]: E0312 08:25:36.733228 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="ceilometer-central-agent" Mar 12 
08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.733252 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="ceilometer-central-agent" Mar 12 08:25:36 crc kubenswrapper[4809]: E0312 08:25:36.733295 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="proxy-httpd" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.733304 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="proxy-httpd" Mar 12 08:25:36 crc kubenswrapper[4809]: E0312 08:25:36.733324 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="ceilometer-notification-agent" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.733333 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="ceilometer-notification-agent" Mar 12 08:25:36 crc kubenswrapper[4809]: E0312 08:25:36.733342 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="sg-core" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.733348 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="sg-core" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.733607 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="proxy-httpd" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.733634 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="ceilometer-notification-agent" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.733648 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" 
containerName="ceilometer-central-agent" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.733661 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" containerName="sg-core" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.737204 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.741757 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.742202 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.760334 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.880315 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-config-data\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.880404 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.880588 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " 
pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.880639 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-scripts\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.880676 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tdh5\" (UniqueName: \"kubernetes.io/projected/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-kube-api-access-6tdh5\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.880706 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-run-httpd\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.880763 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-log-httpd\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.984857 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-log-httpd\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.985072 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-config-data\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.985170 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.985350 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.985413 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-scripts\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.985456 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tdh5\" (UniqueName: \"kubernetes.io/projected/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-kube-api-access-6tdh5\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.985495 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-run-httpd\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 
08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.985781 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-log-httpd\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.985854 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-run-httpd\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.994907 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:36 crc kubenswrapper[4809]: I0312 08:25:36.999606 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-scripts\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:37 crc kubenswrapper[4809]: I0312 08:25:37.001162 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-config-data\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:37 crc kubenswrapper[4809]: I0312 08:25:37.012841 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:37 crc kubenswrapper[4809]: I0312 08:25:37.016287 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tdh5\" (UniqueName: \"kubernetes.io/projected/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-kube-api-access-6tdh5\") pod \"ceilometer-0\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " pod="openstack/ceilometer-0" Mar 12 08:25:37 crc kubenswrapper[4809]: I0312 08:25:37.040514 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:37 crc kubenswrapper[4809]: I0312 08:25:37.041637 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:37 crc kubenswrapper[4809]: I0312 08:25:37.128816 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e51b68-51b0-4e3b-928f-9f367ffc672a" path="/var/lib/kubelet/pods/38e51b68-51b0-4e3b-928f-9f367ffc672a/volumes" Mar 12 08:25:37 crc kubenswrapper[4809]: I0312 08:25:37.580839 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:37 crc kubenswrapper[4809]: W0312 08:25:37.593403 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8faa62a_95fb_49f9_a07c_0d4922ed23a6.slice/crio-89065ab054e33add7f588cf9e4b0c43cd66fce0e905ca93af2acb1d17a12b8c6 WatchSource:0}: Error finding container 89065ab054e33add7f588cf9e4b0c43cd66fce0e905ca93af2acb1d17a12b8c6: Status 404 returned error can't find the container with id 89065ab054e33add7f588cf9e4b0c43cd66fce0e905ca93af2acb1d17a12b8c6 Mar 12 08:25:37 crc kubenswrapper[4809]: I0312 08:25:37.608729 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerStarted","Data":"89065ab054e33add7f588cf9e4b0c43cd66fce0e905ca93af2acb1d17a12b8c6"} Mar 12 
08:25:39 crc kubenswrapper[4809]: I0312 08:25:39.639955 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerStarted","Data":"1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716"} Mar 12 08:25:39 crc kubenswrapper[4809]: I0312 08:25:39.640832 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerStarted","Data":"ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe"} Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.036836 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.652272 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-m6nvs"] Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.654966 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.662631 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.662852 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.666921 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m6nvs"] Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.711274 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerStarted","Data":"2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651"} Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.743692 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.797347 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbps6\" (UniqueName: \"kubernetes.io/projected/661848d4-7363-415e-8446-98751a00c6de-kube-api-access-xbps6\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.797441 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.797504 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-scripts\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.797554 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-config-data\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.900856 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbps6\" (UniqueName: \"kubernetes.io/projected/661848d4-7363-415e-8446-98751a00c6de-kube-api-access-xbps6\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.900962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.901028 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-scripts\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.901078 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-config-data\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.913150 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-scripts\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.916365 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.919479 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-config-data\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.927655 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.943187 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.945547 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.951538 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.958922 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-qrkcj"] Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.960750 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.973437 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:25:40 crc kubenswrapper[4809]: I0312 08:25:40.999234 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-qrkcj"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.009403 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbps6\" (UniqueName: \"kubernetes.io/projected/661848d4-7363-415e-8446-98751a00c6de-kube-api-access-xbps6\") pod \"nova-cell0-cell-mapping-m6nvs\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.038809 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.041369 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.063622 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.094330 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.107975 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:25:41 crc kubenswrapper[4809]: E0312 08:25:41.108371 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.114420 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.114531 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-config-data\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.114603 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d7871824-2524-4083-8a87-f7f27f0f533f-logs\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.114719 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ws8\" (UniqueName: \"kubernetes.io/projected/0d584655-82a5-46a8-ae0b-9c1abf01de7a-kube-api-access-l8ws8\") pod \"aodh-db-create-qrkcj\" (UID: \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\") " pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.114824 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d584655-82a5-46a8-ae0b-9c1abf01de7a-operator-scripts\") pod \"aodh-db-create-qrkcj\" (UID: \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\") " pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.114895 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lln\" (UniqueName: \"kubernetes.io/projected/d7871824-2524-4083-8a87-f7f27f0f533f-kube-api-access-94lln\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.219810 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d584655-82a5-46a8-ae0b-9c1abf01de7a-operator-scripts\") pod \"aodh-db-create-qrkcj\" (UID: \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\") " pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.223128 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94lln\" (UniqueName: 
\"kubernetes.io/projected/d7871824-2524-4083-8a87-f7f27f0f533f-kube-api-access-94lln\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.223290 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.223391 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64vb\" (UniqueName: \"kubernetes.io/projected/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-kube-api-access-t64vb\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.223428 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-config-data\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.223509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7871824-2524-4083-8a87-f7f27f0f533f-logs\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.223526 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-config-data\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 
08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.223106 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d584655-82a5-46a8-ae0b-9c1abf01de7a-operator-scripts\") pod \"aodh-db-create-qrkcj\" (UID: \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\") " pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.229383 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7871824-2524-4083-8a87-f7f27f0f533f-logs\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.231002 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ws8\" (UniqueName: \"kubernetes.io/projected/0d584655-82a5-46a8-ae0b-9c1abf01de7a-kube-api-access-l8ws8\") pod \"aodh-db-create-qrkcj\" (UID: \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\") " pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.231170 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.249939 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.256493 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmv6w"] Mar 12 
08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.266914 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-config-data\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.267171 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94lln\" (UniqueName: \"kubernetes.io/projected/d7871824-2524-4083-8a87-f7f27f0f533f-kube-api-access-94lln\") pod \"nova-api-0\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") " pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.272817 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ws8\" (UniqueName: \"kubernetes.io/projected/0d584655-82a5-46a8-ae0b-9c1abf01de7a-kube-api-access-l8ws8\") pod \"aodh-db-create-qrkcj\" (UID: \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\") " pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.294515 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.339997 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64vb\" (UniqueName: \"kubernetes.io/projected/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-kube-api-access-t64vb\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.340089 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-config-data\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.340243 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.348820 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.368501 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-config-data\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.370141 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t64vb\" (UniqueName: \"kubernetes.io/projected/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-kube-api-access-t64vb\") pod \"nova-scheduler-0\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") " pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.370300 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.374603 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.380879 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.393923 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-814c-account-create-update-8vv4f"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.396776 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.399748 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.416421 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-814c-account-create-update-8vv4f"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.494742 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.498777 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd15dc5-2ce6-4c1b-a683-f73beca93754-operator-scripts\") pod \"aodh-814c-account-create-update-8vv4f\" (UID: \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\") " pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.499006 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prvhj\" (UniqueName: \"kubernetes.io/projected/5dd15dc5-2ce6-4c1b-a683-f73beca93754-kube-api-access-prvhj\") pod \"aodh-814c-account-create-update-8vv4f\" (UID: \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\") " pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.512735 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.528266 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.573580 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.608214 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-config-data\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.608268 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.608308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-logs\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.611053 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jg2\" (UniqueName: \"kubernetes.io/projected/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-kube-api-access-r9jg2\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.611328 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd15dc5-2ce6-4c1b-a683-f73beca93754-operator-scripts\") pod 
\"aodh-814c-account-create-update-8vv4f\" (UID: \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\") " pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.611459 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prvhj\" (UniqueName: \"kubernetes.io/projected/5dd15dc5-2ce6-4c1b-a683-f73beca93754-kube-api-access-prvhj\") pod \"aodh-814c-account-create-update-8vv4f\" (UID: \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\") " pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.613288 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd15dc5-2ce6-4c1b-a683-f73beca93754-operator-scripts\") pod \"aodh-814c-account-create-update-8vv4f\" (UID: \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\") " pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.638271 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.641229 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.642007 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prvhj\" (UniqueName: \"kubernetes.io/projected/5dd15dc5-2ce6-4c1b-a683-f73beca93754-kube-api-access-prvhj\") pod \"aodh-814c-account-create-update-8vv4f\" (UID: \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\") " pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.646192 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.683803 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.715312 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.715436 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.715974 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-config-data\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc 
kubenswrapper[4809]: I0312 08:25:41.716062 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6m6\" (UniqueName: \"kubernetes.io/projected/3b1185a8-a7d9-4f17-b98f-02b8051d196a-kube-api-access-kk6m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.716146 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.716221 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-logs\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.716298 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9jg2\" (UniqueName: \"kubernetes.io/projected/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-kube-api-access-r9jg2\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.718482 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-logs\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.724196 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.729048 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-config-data\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.731605 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-jhzbr"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.736421 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.748449 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9jg2\" (UniqueName: \"kubernetes.io/projected/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-kube-api-access-r9jg2\") pod \"nova-metadata-0\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " pod="openstack/nova-metadata-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.761767 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-jhzbr"] Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.783464 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.820235 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6m6\" (UniqueName: \"kubernetes.io/projected/3b1185a8-a7d9-4f17-b98f-02b8051d196a-kube-api-access-kk6m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.820514 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.820641 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.829709 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.832359 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc 
kubenswrapper[4809]: I0312 08:25:41.851173 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6m6\" (UniqueName: \"kubernetes.io/projected/3b1185a8-a7d9-4f17-b98f-02b8051d196a-kube-api-access-kk6m6\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.923778 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.924087 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.924238 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.924426 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npnsz\" (UniqueName: \"kubernetes.io/projected/92857c04-f2b0-41d7-b825-591496c43e0c-kube-api-access-npnsz\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" 
Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.924502 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-config\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.924576 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:41 crc kubenswrapper[4809]: I0312 08:25:41.986912 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.015214 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.027583 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.027668 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.027742 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.027968 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npnsz\" (UniqueName: \"kubernetes.io/projected/92857c04-f2b0-41d7-b825-591496c43e0c-kube-api-access-npnsz\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.027994 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-config\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 
08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.028016 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.028887 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.029684 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.030021 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-config\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.030489 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.030698 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.057694 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npnsz\" (UniqueName: \"kubernetes.io/projected/92857c04-f2b0-41d7-b825-591496c43e0c-kube-api-access-npnsz\") pod \"dnsmasq-dns-9b86998b5-jhzbr\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.107908 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.114350 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m6nvs"] Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.674491 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.704811 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-qrkcj"] Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.719972 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.781183 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d7871824-2524-4083-8a87-f7f27f0f533f","Type":"ContainerStarted","Data":"00d311cf6e1cefdee794500f6eae37bf511651973155bf65b6529ba23634a0b5"} Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.801031 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qrkcj" 
event={"ID":"0d584655-82a5-46a8-ae0b-9c1abf01de7a","Type":"ContainerStarted","Data":"30c97709207e79f0c536d6ebfe1df4e37d846fb9bc88971cfe6f703a57058b51"} Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.817362 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04249628-0bfe-4de8-b6ad-f8508e7b1a8a","Type":"ContainerStarted","Data":"52b8968390064a3028329ea2daac949e19b60d9b6ac724f35f0f0ed82b3cf764"} Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.819845 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cmv6w" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="registry-server" containerID="cri-o://13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36" gracePeriod=2 Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.821318 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m6nvs" event={"ID":"661848d4-7363-415e-8446-98751a00c6de","Type":"ContainerStarted","Data":"3d5c947b257864e12e92b5e465b049ec476bdd76f6f249a743a5bc23774eb9da"} Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.821362 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m6nvs" event={"ID":"661848d4-7363-415e-8446-98751a00c6de","Type":"ContainerStarted","Data":"ce21b4d31d1ac8920373ea11323590218231374a752393172d317329f0dedcb7"} Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.823529 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-814c-account-create-update-8vv4f"] Mar 12 08:25:42 crc kubenswrapper[4809]: I0312 08:25:42.857695 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-m6nvs" podStartSLOduration=2.857669285 podStartE2EDuration="2.857669285s" podCreationTimestamp="2026-03-12 08:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:25:42.83870082 +0000 UTC m=+1616.420736553" watchObservedRunningTime="2026-03-12 08:25:42.857669285 +0000 UTC m=+1616.439705018" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.208635 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.209020 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.450203 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-jhzbr"] Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.565861 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bc2l4"] Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.571069 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.579900 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.580402 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.627771 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bc2l4"] Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.710414 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-config-data\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc 
kubenswrapper[4809]: I0312 08:25:43.710489 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-scripts\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.710594 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5h65\" (UniqueName: \"kubernetes.io/projected/20b188d8-97a2-47ef-a863-e243b1f38483-kube-api-access-f5h65\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.710626 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.739331 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.813672 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-config-data\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.816708 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-scripts\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.817206 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5h65\" (UniqueName: \"kubernetes.io/projected/20b188d8-97a2-47ef-a863-e243b1f38483-kube-api-access-f5h65\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.817276 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.821859 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-scripts\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " 
pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.821932 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-config-data\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.833409 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.838830 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5h65\" (UniqueName: \"kubernetes.io/projected/20b188d8-97a2-47ef-a863-e243b1f38483-kube-api-access-f5h65\") pod \"nova-cell1-conductor-db-sync-bc2l4\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") " pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.847456 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qrkcj" event={"ID":"0d584655-82a5-46a8-ae0b-9c1abf01de7a","Type":"ContainerDied","Data":"00e51b55baea3a9058a538fab5506ef615a01a6c2515cdf89e35ddf508e181b2"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.847469 4809 generic.go:334] "Generic (PLEG): container finished" podID="0d584655-82a5-46a8-ae0b-9c1abf01de7a" containerID="00e51b55baea3a9058a538fab5506ef615a01a6c2515cdf89e35ddf508e181b2" exitCode=0 Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.849290 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" 
event={"ID":"92857c04-f2b0-41d7-b825-591496c43e0c","Type":"ContainerStarted","Data":"8f8f338f2ace511693611de23fb4ec2761649c17318bb58f657b81bec5b4979a"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.852090 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1483a15a-a49d-46c7-886b-85e8f7fcfc0b","Type":"ContainerStarted","Data":"21d0cb3cd65bcea9071cc8298bbe1d8b3843b1c834b3f898ac925e752f273924"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.858034 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-814c-account-create-update-8vv4f" event={"ID":"5dd15dc5-2ce6-4c1b-a683-f73beca93754","Type":"ContainerStarted","Data":"7a0a35cb89bb6f296c304da624d5217bb095ccc7c7933fbcb6efbef2f6ea96b5"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.858078 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-814c-account-create-update-8vv4f" event={"ID":"5dd15dc5-2ce6-4c1b-a683-f73beca93754","Type":"ContainerStarted","Data":"bbda3b28d5b2f257012a715f17e1e8c34057a3d6226575d62be4aff33931abc2"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.873300 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b1185a8-a7d9-4f17-b98f-02b8051d196a","Type":"ContainerStarted","Data":"273dda83bb5f2b01afb8849c7e133afcaa9b9fb33ea408dfb71f7b8eb774d53c"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.879917 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerStarted","Data":"eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.880009 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.880067 4809 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="ceilometer-central-agent" containerID="cri-o://ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe" gracePeriod=30 Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.880130 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="proxy-httpd" containerID="cri-o://eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733" gracePeriod=30 Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.880080 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="sg-core" containerID="cri-o://2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651" gracePeriod=30 Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.880101 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="ceilometer-notification-agent" containerID="cri-o://1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716" gracePeriod=30 Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.894762 4809 generic.go:334] "Generic (PLEG): container finished" podID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerID="13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36" exitCode=0 Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.896257 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmv6w" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.896365 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmv6w" event={"ID":"81403e46-3215-4867-a857-ec7bc0b08c0d","Type":"ContainerDied","Data":"13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.896443 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmv6w" event={"ID":"81403e46-3215-4867-a857-ec7bc0b08c0d","Type":"ContainerDied","Data":"409872c90e973a3c164613e8dd0c7f7821c9ccdacd7e2565e3d34ce14c58f0b7"} Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.896471 4809 scope.go:117] "RemoveContainer" containerID="13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.907857 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-814c-account-create-update-8vv4f" podStartSLOduration=2.907825588 podStartE2EDuration="2.907825588s" podCreationTimestamp="2026-03-12 08:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:25:43.886080867 +0000 UTC m=+1617.468116610" watchObservedRunningTime="2026-03-12 08:25:43.907825588 +0000 UTC m=+1617.489861321" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.912163 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bc2l4" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.927655 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pzrm\" (UniqueName: \"kubernetes.io/projected/81403e46-3215-4867-a857-ec7bc0b08c0d-kube-api-access-4pzrm\") pod \"81403e46-3215-4867-a857-ec7bc0b08c0d\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.927728 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-catalog-content\") pod \"81403e46-3215-4867-a857-ec7bc0b08c0d\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.928134 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-utilities\") pod \"81403e46-3215-4867-a857-ec7bc0b08c0d\" (UID: \"81403e46-3215-4867-a857-ec7bc0b08c0d\") " Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.935152 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-utilities" (OuterVolumeSpecName: "utilities") pod "81403e46-3215-4867-a857-ec7bc0b08c0d" (UID: "81403e46-3215-4867-a857-ec7bc0b08c0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.940056 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81403e46-3215-4867-a857-ec7bc0b08c0d-kube-api-access-4pzrm" (OuterVolumeSpecName: "kube-api-access-4pzrm") pod "81403e46-3215-4867-a857-ec7bc0b08c0d" (UID: "81403e46-3215-4867-a857-ec7bc0b08c0d"). InnerVolumeSpecName "kube-api-access-4pzrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.945908 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.044503703 podStartE2EDuration="7.945877942s" podCreationTimestamp="2026-03-12 08:25:36 +0000 UTC" firstStartedPulling="2026-03-12 08:25:37.59832933 +0000 UTC m=+1611.180365063" lastFinishedPulling="2026-03-12 08:25:42.499703569 +0000 UTC m=+1616.081739302" observedRunningTime="2026-03-12 08:25:43.933913598 +0000 UTC m=+1617.515949341" watchObservedRunningTime="2026-03-12 08:25:43.945877942 +0000 UTC m=+1617.527913675" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.969214 4809 scope.go:117] "RemoveContainer" containerID="96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c" Mar 12 08:25:43 crc kubenswrapper[4809]: I0312 08:25:43.981383 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81403e46-3215-4867-a857-ec7bc0b08c0d" (UID: "81403e46-3215-4867-a857-ec7bc0b08c0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.041025 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pzrm\" (UniqueName: \"kubernetes.io/projected/81403e46-3215-4867-a857-ec7bc0b08c0d-kube-api-access-4pzrm\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.041079 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.041097 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81403e46-3215-4867-a857-ec7bc0b08c0d-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.330406 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmv6w"] Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.351761 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmv6w"] Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.359626 4809 scope.go:117] "RemoveContainer" containerID="15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.487814 4809 scope.go:117] "RemoveContainer" containerID="13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36" Mar 12 08:25:44 crc kubenswrapper[4809]: E0312 08:25:44.488562 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36\": container with ID starting with 13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36 not found: ID does not exist" 
containerID="13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.488627 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36"} err="failed to get container status \"13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36\": rpc error: code = NotFound desc = could not find container \"13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36\": container with ID starting with 13a6f9a4d1c84a85d6536ed3fee6503e84c0c929bf70f5747ea200abb0391c36 not found: ID does not exist" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.488665 4809 scope.go:117] "RemoveContainer" containerID="96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c" Mar 12 08:25:44 crc kubenswrapper[4809]: E0312 08:25:44.489448 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c\": container with ID starting with 96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c not found: ID does not exist" containerID="96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.489520 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c"} err="failed to get container status \"96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c\": rpc error: code = NotFound desc = could not find container \"96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c\": container with ID starting with 96153f4bfac4680bbb81bab07fcd3c4db29374c87247123c0bfde78b8922cf7c not found: ID does not exist" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.489544 4809 scope.go:117] 
"RemoveContainer" containerID="15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9" Mar 12 08:25:44 crc kubenswrapper[4809]: E0312 08:25:44.490228 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9\": container with ID starting with 15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9 not found: ID does not exist" containerID="15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.490256 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9"} err="failed to get container status \"15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9\": rpc error: code = NotFound desc = could not find container \"15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9\": container with ID starting with 15e02705f4d2d77891d788869360f15fe0aef05bdcdcd4046d89d9c2cf3e3ff9 not found: ID does not exist" Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.756333 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bc2l4"] Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.941709 4809 generic.go:334] "Generic (PLEG): container finished" podID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerID="eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733" exitCode=0 Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.943167 4809 generic.go:334] "Generic (PLEG): container finished" podID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerID="2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651" exitCode=2 Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.943182 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerID="1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716" exitCode=0 Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.941827 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerDied","Data":"eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733"} Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.943338 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerDied","Data":"2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651"} Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.943400 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerDied","Data":"1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716"} Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.950045 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bc2l4" event={"ID":"20b188d8-97a2-47ef-a863-e243b1f38483","Type":"ContainerStarted","Data":"c8798eee77e1b723703063c276c910457eb3b88690e631204da17705d8d91b68"} Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.960913 4809 generic.go:334] "Generic (PLEG): container finished" podID="92857c04-f2b0-41d7-b825-591496c43e0c" containerID="ea06fe06a2ac0feda255000f0c5e14cdb76cafba523247bd8556d90d2b4adacf" exitCode=0 Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.961269 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" event={"ID":"92857c04-f2b0-41d7-b825-591496c43e0c","Type":"ContainerDied","Data":"ea06fe06a2ac0feda255000f0c5e14cdb76cafba523247bd8556d90d2b4adacf"} Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.971239 4809 generic.go:334] 
"Generic (PLEG): container finished" podID="5dd15dc5-2ce6-4c1b-a683-f73beca93754" containerID="7a0a35cb89bb6f296c304da624d5217bb095ccc7c7933fbcb6efbef2f6ea96b5" exitCode=0 Mar 12 08:25:44 crc kubenswrapper[4809]: I0312 08:25:44.971545 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-814c-account-create-update-8vv4f" event={"ID":"5dd15dc5-2ce6-4c1b-a683-f73beca93754","Type":"ContainerDied","Data":"7a0a35cb89bb6f296c304da624d5217bb095ccc7c7933fbcb6efbef2f6ea96b5"} Mar 12 08:25:45 crc kubenswrapper[4809]: I0312 08:25:45.074384 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:25:45 crc kubenswrapper[4809]: I0312 08:25:45.092146 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 08:25:45 crc kubenswrapper[4809]: I0312 08:25:45.130959 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" path="/var/lib/kubelet/pods/81403e46-3215-4867-a857-ec7bc0b08c0d/volumes" Mar 12 08:25:45 crc kubenswrapper[4809]: I0312 08:25:45.991044 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bc2l4" event={"ID":"20b188d8-97a2-47ef-a863-e243b1f38483","Type":"ContainerStarted","Data":"d2160229e26eb2441b207a1b23558ef1b1791130c13a1faa180bce47278795bd"} Mar 12 08:25:46 crc kubenswrapper[4809]: I0312 08:25:46.024458 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bc2l4" podStartSLOduration=3.024430956 podStartE2EDuration="3.024430956s" podCreationTimestamp="2026-03-12 08:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:25:46.010925768 +0000 UTC m=+1619.592961501" watchObservedRunningTime="2026-03-12 08:25:46.024430956 +0000 UTC m=+1619.606466689" Mar 12 08:25:46 crc kubenswrapper[4809]: I0312 
08:25:46.516596 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:46 crc kubenswrapper[4809]: I0312 08:25:46.551538 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prvhj\" (UniqueName: \"kubernetes.io/projected/5dd15dc5-2ce6-4c1b-a683-f73beca93754-kube-api-access-prvhj\") pod \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\" (UID: \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\") " Mar 12 08:25:46 crc kubenswrapper[4809]: I0312 08:25:46.552213 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd15dc5-2ce6-4c1b-a683-f73beca93754-operator-scripts\") pod \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\" (UID: \"5dd15dc5-2ce6-4c1b-a683-f73beca93754\") " Mar 12 08:25:46 crc kubenswrapper[4809]: I0312 08:25:46.554002 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dd15dc5-2ce6-4c1b-a683-f73beca93754-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5dd15dc5-2ce6-4c1b-a683-f73beca93754" (UID: "5dd15dc5-2ce6-4c1b-a683-f73beca93754"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:25:46 crc kubenswrapper[4809]: I0312 08:25:46.563241 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd15dc5-2ce6-4c1b-a683-f73beca93754-kube-api-access-prvhj" (OuterVolumeSpecName: "kube-api-access-prvhj") pod "5dd15dc5-2ce6-4c1b-a683-f73beca93754" (UID: "5dd15dc5-2ce6-4c1b-a683-f73beca93754"). InnerVolumeSpecName "kube-api-access-prvhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:46 crc kubenswrapper[4809]: I0312 08:25:46.655157 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd15dc5-2ce6-4c1b-a683-f73beca93754-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:46 crc kubenswrapper[4809]: I0312 08:25:46.655197 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prvhj\" (UniqueName: \"kubernetes.io/projected/5dd15dc5-2ce6-4c1b-a683-f73beca93754-kube-api-access-prvhj\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:47 crc kubenswrapper[4809]: I0312 08:25:47.011986 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-814c-account-create-update-8vv4f" event={"ID":"5dd15dc5-2ce6-4c1b-a683-f73beca93754","Type":"ContainerDied","Data":"bbda3b28d5b2f257012a715f17e1e8c34057a3d6226575d62be4aff33931abc2"} Mar 12 08:25:47 crc kubenswrapper[4809]: I0312 08:25:47.012054 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbda3b28d5b2f257012a715f17e1e8c34057a3d6226575d62be4aff33931abc2" Mar 12 08:25:47 crc kubenswrapper[4809]: I0312 08:25:47.012047 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-814c-account-create-update-8vv4f" Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.030716 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qrkcj" event={"ID":"0d584655-82a5-46a8-ae0b-9c1abf01de7a","Type":"ContainerDied","Data":"30c97709207e79f0c536d6ebfe1df4e37d846fb9bc88971cfe6f703a57058b51"} Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.031265 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30c97709207e79f0c536d6ebfe1df4e37d846fb9bc88971cfe6f703a57058b51" Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.037517 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.102401 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d584655-82a5-46a8-ae0b-9c1abf01de7a-operator-scripts\") pod \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\" (UID: \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\") " Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.102447 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8ws8\" (UniqueName: \"kubernetes.io/projected/0d584655-82a5-46a8-ae0b-9c1abf01de7a-kube-api-access-l8ws8\") pod \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\" (UID: \"0d584655-82a5-46a8-ae0b-9c1abf01de7a\") " Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.103305 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d584655-82a5-46a8-ae0b-9c1abf01de7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d584655-82a5-46a8-ae0b-9c1abf01de7a" (UID: "0d584655-82a5-46a8-ae0b-9c1abf01de7a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.106157 4809 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d584655-82a5-46a8-ae0b-9c1abf01de7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.116225 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d584655-82a5-46a8-ae0b-9c1abf01de7a-kube-api-access-l8ws8" (OuterVolumeSpecName: "kube-api-access-l8ws8") pod "0d584655-82a5-46a8-ae0b-9c1abf01de7a" (UID: "0d584655-82a5-46a8-ae0b-9c1abf01de7a"). InnerVolumeSpecName "kube-api-access-l8ws8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:48 crc kubenswrapper[4809]: I0312 08:25:48.208508 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8ws8\" (UniqueName: \"kubernetes.io/projected/0d584655-82a5-46a8-ae0b-9c1abf01de7a-kube-api-access-l8ws8\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.047420 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d7871824-2524-4083-8a87-f7f27f0f533f","Type":"ContainerStarted","Data":"158b3499e55f9be843afd74724441318312b8a36ad2dfcb94b5469080b433793"} Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.048244 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d7871824-2524-4083-8a87-f7f27f0f533f","Type":"ContainerStarted","Data":"bfd1785d5e2c4a7e31cce9c1c527cda33937f0505156dcde14c8c8740009470d"} Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.053353 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" event={"ID":"92857c04-f2b0-41d7-b825-591496c43e0c","Type":"ContainerStarted","Data":"89f840827fdf7cd1b10ca2e038fc3c8c9f5a09ec8a3e5f77fc61386bc2ca2ad9"} Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.054326 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.058699 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1483a15a-a49d-46c7-886b-85e8f7fcfc0b","Type":"ContainerStarted","Data":"46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0"} Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.058749 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"1483a15a-a49d-46c7-886b-85e8f7fcfc0b","Type":"ContainerStarted","Data":"b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb"} Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.058879 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerName="nova-metadata-log" containerID="cri-o://b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb" gracePeriod=30 Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.059152 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerName="nova-metadata-metadata" containerID="cri-o://46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0" gracePeriod=30 Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.067454 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04249628-0bfe-4de8-b6ad-f8508e7b1a8a","Type":"ContainerStarted","Data":"3086965d63cda37dedda0b1906cd757d0cad972875d3261a35076c43e2761d72"} Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.070528 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-qrkcj" Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.070610 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3b1185a8-a7d9-4f17-b98f-02b8051d196a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9" gracePeriod=30 Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.070750 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b1185a8-a7d9-4f17-b98f-02b8051d196a","Type":"ContainerStarted","Data":"f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9"} Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.115713 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.817277806 podStartE2EDuration="9.115690484s" podCreationTimestamp="2026-03-12 08:25:40 +0000 UTC" firstStartedPulling="2026-03-12 08:25:42.750809849 +0000 UTC m=+1616.332845582" lastFinishedPulling="2026-03-12 08:25:48.049222527 +0000 UTC m=+1621.631258260" observedRunningTime="2026-03-12 08:25:49.084065063 +0000 UTC m=+1622.666100796" watchObservedRunningTime="2026-03-12 08:25:49.115690484 +0000 UTC m=+1622.697726217" Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.144784 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.331410478 podStartE2EDuration="8.144754823s" podCreationTimestamp="2026-03-12 08:25:41 +0000 UTC" firstStartedPulling="2026-03-12 08:25:43.228400879 +0000 UTC m=+1616.810436612" lastFinishedPulling="2026-03-12 08:25:48.041745224 +0000 UTC m=+1621.623780957" observedRunningTime="2026-03-12 08:25:49.11408513 +0000 UTC m=+1622.696120863" watchObservedRunningTime="2026-03-12 08:25:49.144754823 +0000 UTC m=+1622.726790556" Mar 12 08:25:49 crc 
kubenswrapper[4809]: I0312 08:25:49.178478 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.770531134 podStartE2EDuration="9.178450231s" podCreationTimestamp="2026-03-12 08:25:40 +0000 UTC" firstStartedPulling="2026-03-12 08:25:42.633639842 +0000 UTC m=+1616.215675575" lastFinishedPulling="2026-03-12 08:25:48.041558939 +0000 UTC m=+1621.623594672" observedRunningTime="2026-03-12 08:25:49.14718487 +0000 UTC m=+1622.729220603" watchObservedRunningTime="2026-03-12 08:25:49.178450231 +0000 UTC m=+1622.760485964" Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.245496 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.512012641 podStartE2EDuration="8.245469093s" podCreationTimestamp="2026-03-12 08:25:41 +0000 UTC" firstStartedPulling="2026-03-12 08:25:43.307741777 +0000 UTC m=+1616.889777510" lastFinishedPulling="2026-03-12 08:25:48.041198219 +0000 UTC m=+1621.623233962" observedRunningTime="2026-03-12 08:25:49.186991673 +0000 UTC m=+1622.769027406" watchObservedRunningTime="2026-03-12 08:25:49.245469093 +0000 UTC m=+1622.827504826" Mar 12 08:25:49 crc kubenswrapper[4809]: I0312 08:25:49.266038 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" podStartSLOduration=8.266006821 podStartE2EDuration="8.266006821s" podCreationTimestamp="2026-03-12 08:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:25:49.213889634 +0000 UTC m=+1622.795925377" watchObservedRunningTime="2026-03-12 08:25:49.266006821 +0000 UTC m=+1622.848042544" Mar 12 08:25:49 crc kubenswrapper[4809]: E0312 08:25:49.574133 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d584655_82a5_46a8_ae0b_9c1abf01de7a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8faa62a_95fb_49f9_a07c_0d4922ed23a6.slice/crio-conmon-ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d584655_82a5_46a8_ae0b_9c1abf01de7a.slice/crio-30c97709207e79f0c536d6ebfe1df4e37d846fb9bc88971cfe6f703a57058b51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8faa62a_95fb_49f9_a07c_0d4922ed23a6.slice/crio-ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe.scope\": RecentStats: unable to find data in memory cache]" Mar 12 08:25:49 crc kubenswrapper[4809]: E0312 08:25:49.574186 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d584655_82a5_46a8_ae0b_9c1abf01de7a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d584655_82a5_46a8_ae0b_9c1abf01de7a.slice/crio-30c97709207e79f0c536d6ebfe1df4e37d846fb9bc88971cfe6f703a57058b51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8faa62a_95fb_49f9_a07c_0d4922ed23a6.slice/crio-ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe.scope\": RecentStats: unable to find data in memory cache]" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.006950 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.073827 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-log-httpd\") pod \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.074016 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-run-httpd\") pod \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.074436 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-config-data\") pod \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.074588 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-combined-ca-bundle\") pod \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.074673 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-scripts\") pod \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.074685 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-run-httpd" 
(OuterVolumeSpecName: "run-httpd") pod "a8faa62a-95fb-49f9-a07c-0d4922ed23a6" (UID: "a8faa62a-95fb-49f9-a07c-0d4922ed23a6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.074713 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-sg-core-conf-yaml\") pod \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.074809 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tdh5\" (UniqueName: \"kubernetes.io/projected/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-kube-api-access-6tdh5\") pod \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\" (UID: \"a8faa62a-95fb-49f9-a07c-0d4922ed23a6\") " Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.076770 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.077626 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8faa62a-95fb-49f9-a07c-0d4922ed23a6" (UID: "a8faa62a-95fb-49f9-a07c-0d4922ed23a6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.082365 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-kube-api-access-6tdh5" (OuterVolumeSpecName: "kube-api-access-6tdh5") pod "a8faa62a-95fb-49f9-a07c-0d4922ed23a6" (UID: "a8faa62a-95fb-49f9-a07c-0d4922ed23a6"). 
InnerVolumeSpecName "kube-api-access-6tdh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.088679 4809 generic.go:334] "Generic (PLEG): container finished" podID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerID="b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb" exitCode=143 Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.088741 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1483a15a-a49d-46c7-886b-85e8f7fcfc0b","Type":"ContainerDied","Data":"b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb"} Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.090990 4809 generic.go:334] "Generic (PLEG): container finished" podID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerID="ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe" exitCode=0 Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.094520 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.095258 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerDied","Data":"ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe"} Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.095294 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8faa62a-95fb-49f9-a07c-0d4922ed23a6","Type":"ContainerDied","Data":"89065ab054e33add7f588cf9e4b0c43cd66fce0e905ca93af2acb1d17a12b8c6"} Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.095333 4809 scope.go:117] "RemoveContainer" containerID="eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.103364 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-scripts" (OuterVolumeSpecName: "scripts") pod "a8faa62a-95fb-49f9-a07c-0d4922ed23a6" (UID: "a8faa62a-95fb-49f9-a07c-0d4922ed23a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.125674 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8faa62a-95fb-49f9-a07c-0d4922ed23a6" (UID: "a8faa62a-95fb-49f9-a07c-0d4922ed23a6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.180412 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.180468 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.180482 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tdh5\" (UniqueName: \"kubernetes.io/projected/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-kube-api-access-6tdh5\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.180496 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.220203 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8faa62a-95fb-49f9-a07c-0d4922ed23a6" (UID: "a8faa62a-95fb-49f9-a07c-0d4922ed23a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.264628 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-config-data" (OuterVolumeSpecName: "config-data") pod "a8faa62a-95fb-49f9-a07c-0d4922ed23a6" (UID: "a8faa62a-95fb-49f9-a07c-0d4922ed23a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.282871 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.282905 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8faa62a-95fb-49f9-a07c-0d4922ed23a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.406989 4809 scope.go:117] "RemoveContainer" containerID="2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.443656 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.498826 4809 scope.go:117] "RemoveContainer" containerID="1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.534435 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.547831 4809 scope.go:117] "RemoveContainer" containerID="ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.561212 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562071 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd15dc5-2ce6-4c1b-a683-f73beca93754" containerName="mariadb-account-create-update" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562092 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd15dc5-2ce6-4c1b-a683-f73beca93754" containerName="mariadb-account-create-update" Mar 12 
08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562136 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d584655-82a5-46a8-ae0b-9c1abf01de7a" containerName="mariadb-database-create" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562145 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d584655-82a5-46a8-ae0b-9c1abf01de7a" containerName="mariadb-database-create" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562158 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="sg-core" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562165 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="sg-core" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562200 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="extract-content" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562207 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="extract-content" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562224 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="extract-utilities" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562233 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="extract-utilities" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562246 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="ceilometer-central-agent" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562252 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="ceilometer-central-agent" 
Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562268 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="proxy-httpd" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562277 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="proxy-httpd" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562296 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="registry-server" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562303 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="registry-server" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.562325 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="ceilometer-notification-agent" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562333 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="ceilometer-notification-agent" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562689 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d584655-82a5-46a8-ae0b-9c1abf01de7a" containerName="mariadb-database-create" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562713 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="ceilometer-notification-agent" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562727 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="sg-core" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562742 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="proxy-httpd" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562761 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="81403e46-3215-4867-a857-ec7bc0b08c0d" containerName="registry-server" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562775 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd15dc5-2ce6-4c1b-a683-f73beca93754" containerName="mariadb-account-create-update" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.562786 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" containerName="ceilometer-central-agent" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.571287 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.575334 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.575752 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.580643 4809 scope.go:117] "RemoveContainer" containerID="eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.581228 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733\": container with ID starting with eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733 not found: ID does not exist" containerID="eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.581269 4809 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733"} err="failed to get container status \"eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733\": rpc error: code = NotFound desc = could not find container \"eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733\": container with ID starting with eb85097106a8bea3e909cf01327fc59c5b9761eef0a8b15e6a6e87651ce74733 not found: ID does not exist" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.581303 4809 scope.go:117] "RemoveContainer" containerID="2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.581624 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651\": container with ID starting with 2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651 not found: ID does not exist" containerID="2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.581657 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651"} err="failed to get container status \"2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651\": rpc error: code = NotFound desc = could not find container \"2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651\": container with ID starting with 2d561ad4afc353dad21d1e700846f54f5711b8817bee0bd755991e9880381651 not found: ID does not exist" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.581678 4809 scope.go:117] "RemoveContainer" containerID="1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.582214 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.582237 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716\": container with ID starting with 1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716 not found: ID does not exist" containerID="1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.582266 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716"} err="failed to get container status \"1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716\": rpc error: code = NotFound desc = could not find container \"1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716\": container with ID starting with 1fb238a7ca9a6b8e1c466bb294e083127290a23a2e0498cc54d1ae51e5463716 not found: ID does not exist" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.582287 4809 scope.go:117] "RemoveContainer" containerID="ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe" Mar 12 08:25:50 crc kubenswrapper[4809]: E0312 08:25:50.582695 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe\": container with ID starting with ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe not found: ID does not exist" containerID="ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.582739 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe"} err="failed to get container 
status \"ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe\": rpc error: code = NotFound desc = could not find container \"ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe\": container with ID starting with ab6c698c5b6aa1039853d1a6c498c2982a85ceaa221200cd94c72b43ecaa4ebe not found: ID does not exist" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.606547 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.606745 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-scripts\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.606950 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-config-data\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.607092 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-run-httpd\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.607167 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4qp\" 
(UniqueName: \"kubernetes.io/projected/77145032-e837-4a39-bc67-703645401e34-kube-api-access-vj4qp\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.607910 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-log-httpd\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.608081 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.715664 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.717951 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.718260 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-scripts\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " 
pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.718374 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-config-data\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.718480 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-run-httpd\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.718718 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4qp\" (UniqueName: \"kubernetes.io/projected/77145032-e837-4a39-bc67-703645401e34-kube-api-access-vj4qp\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.719321 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-run-httpd\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.720023 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-log-httpd\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.720603 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-log-httpd\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.729197 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.729492 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.729827 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-scripts\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.737184 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-config-data\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.737655 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4qp\" (UniqueName: \"kubernetes.io/projected/77145032-e837-4a39-bc67-703645401e34-kube-api-access-vj4qp\") pod \"ceilometer-0\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") " pod="openstack/ceilometer-0" Mar 12 08:25:50 crc kubenswrapper[4809]: I0312 08:25:50.899723 4809 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.131800 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8faa62a-95fb-49f9-a07c-0d4922ed23a6" path="/var/lib/kubelet/pods/a8faa62a-95fb-49f9-a07c-0d4922ed23a6/volumes" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.404961 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-rr6w8"] Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.407226 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.410269 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.410597 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.410721 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.410848 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-qhrlw" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.441394 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-config-data\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.441498 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7sbg\" (UniqueName: \"kubernetes.io/projected/d49c64c6-9e86-4436-a9e6-2723aecbacfe-kube-api-access-b7sbg\") pod 
\"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.441543 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-scripts\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.441607 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-combined-ca-bundle\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.462171 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rr6w8"] Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.483207 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.498696 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.498935 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.529784 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.530219 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.544668 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-scripts\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.544831 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-combined-ca-bundle\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.544981 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-config-data\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.545092 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7sbg\" (UniqueName: \"kubernetes.io/projected/d49c64c6-9e86-4436-a9e6-2723aecbacfe-kube-api-access-b7sbg\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.560011 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-config-data\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.560225 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-scripts\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " 
pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.569835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-combined-ca-bundle\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.575854 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7sbg\" (UniqueName: \"kubernetes.io/projected/d49c64c6-9e86-4436-a9e6-2723aecbacfe-kube-api-access-b7sbg\") pod \"aodh-db-sync-rr6w8\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.596631 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.739470 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:25:51 crc kubenswrapper[4809]: I0312 08:25:51.991253 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:25:52 crc kubenswrapper[4809]: I0312 08:25:52.016998 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 08:25:52 crc kubenswrapper[4809]: I0312 08:25:52.017065 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 08:25:52 crc kubenswrapper[4809]: I0312 08:25:52.217408 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerStarted","Data":"8db5c8db53b01b53be684603dd951cc5ab33fef80237c345236b345dbae416f4"} Mar 12 08:25:52 crc kubenswrapper[4809]: I0312 08:25:52.279008 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 08:25:52 crc kubenswrapper[4809]: I0312 08:25:52.409949 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rr6w8"] Mar 12 08:25:52 crc kubenswrapper[4809]: I0312 08:25:52.581374 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:25:52 crc kubenswrapper[4809]: I0312 08:25:52.581389 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:25:53 crc kubenswrapper[4809]: I0312 08:25:53.106421 4809 scope.go:117] 
"RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:25:53 crc kubenswrapper[4809]: E0312 08:25:53.108159 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:25:53 crc kubenswrapper[4809]: I0312 08:25:53.264574 4809 generic.go:334] "Generic (PLEG): container finished" podID="661848d4-7363-415e-8446-98751a00c6de" containerID="3d5c947b257864e12e92b5e465b049ec476bdd76f6f249a743a5bc23774eb9da" exitCode=0 Mar 12 08:25:53 crc kubenswrapper[4809]: I0312 08:25:53.264702 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m6nvs" event={"ID":"661848d4-7363-415e-8446-98751a00c6de","Type":"ContainerDied","Data":"3d5c947b257864e12e92b5e465b049ec476bdd76f6f249a743a5bc23774eb9da"} Mar 12 08:25:53 crc kubenswrapper[4809]: I0312 08:25:53.274368 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rr6w8" event={"ID":"d49c64c6-9e86-4436-a9e6-2723aecbacfe","Type":"ContainerStarted","Data":"1eb12cc6abbc3e05c3cb78c350f404292318c6cd4874e449a5e73ccd3aa24c15"} Mar 12 08:25:53 crc kubenswrapper[4809]: I0312 08:25:53.287960 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerStarted","Data":"c07acd4906983bb3a28bde7781a63c5e94337c2c3548ec007980bcb1320d96f4"} Mar 12 08:25:53 crc kubenswrapper[4809]: I0312 08:25:53.288024 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerStarted","Data":"816deaf821bdae861bc904a76d1824e99542790a8c447ae628f952a789953810"} Mar 12 08:25:54 crc kubenswrapper[4809]: I0312 08:25:54.305311 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerStarted","Data":"cd12592ebc6b4b12d66a89bed2e3b33b2772025ad6963dc28af5e402db06c71d"} Mar 12 08:25:54 crc kubenswrapper[4809]: I0312 08:25:54.870791 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:54 crc kubenswrapper[4809]: I0312 08:25:54.997826 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-scripts\") pod \"661848d4-7363-415e-8446-98751a00c6de\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " Mar 12 08:25:54 crc kubenswrapper[4809]: I0312 08:25:54.997956 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-config-data\") pod \"661848d4-7363-415e-8446-98751a00c6de\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " Mar 12 08:25:54 crc kubenswrapper[4809]: I0312 08:25:54.998058 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbps6\" (UniqueName: \"kubernetes.io/projected/661848d4-7363-415e-8446-98751a00c6de-kube-api-access-xbps6\") pod \"661848d4-7363-415e-8446-98751a00c6de\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " Mar 12 08:25:54 crc kubenswrapper[4809]: I0312 08:25:54.998412 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-combined-ca-bundle\") pod 
\"661848d4-7363-415e-8446-98751a00c6de\" (UID: \"661848d4-7363-415e-8446-98751a00c6de\") " Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.013505 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-scripts" (OuterVolumeSpecName: "scripts") pod "661848d4-7363-415e-8446-98751a00c6de" (UID: "661848d4-7363-415e-8446-98751a00c6de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.014004 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661848d4-7363-415e-8446-98751a00c6de-kube-api-access-xbps6" (OuterVolumeSpecName: "kube-api-access-xbps6") pod "661848d4-7363-415e-8446-98751a00c6de" (UID: "661848d4-7363-415e-8446-98751a00c6de"). InnerVolumeSpecName "kube-api-access-xbps6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.041785 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "661848d4-7363-415e-8446-98751a00c6de" (UID: "661848d4-7363-415e-8446-98751a00c6de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.076399 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-config-data" (OuterVolumeSpecName: "config-data") pod "661848d4-7363-415e-8446-98751a00c6de" (UID: "661848d4-7363-415e-8446-98751a00c6de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.100882 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbps6\" (UniqueName: \"kubernetes.io/projected/661848d4-7363-415e-8446-98751a00c6de-kube-api-access-xbps6\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.101197 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.101266 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.101323 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661848d4-7363-415e-8446-98751a00c6de-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.326762 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m6nvs" event={"ID":"661848d4-7363-415e-8446-98751a00c6de","Type":"ContainerDied","Data":"ce21b4d31d1ac8920373ea11323590218231374a752393172d317329f0dedcb7"} Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.326811 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce21b4d31d1ac8920373ea11323590218231374a752393172d317329f0dedcb7" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.326827 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m6nvs" Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.334519 4809 generic.go:334] "Generic (PLEG): container finished" podID="20b188d8-97a2-47ef-a863-e243b1f38483" containerID="d2160229e26eb2441b207a1b23558ef1b1791130c13a1faa180bce47278795bd" exitCode=0 Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.334583 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bc2l4" event={"ID":"20b188d8-97a2-47ef-a863-e243b1f38483","Type":"ContainerDied","Data":"d2160229e26eb2441b207a1b23558ef1b1791130c13a1faa180bce47278795bd"} Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.457947 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.458982 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-api" containerID="cri-o://158b3499e55f9be843afd74724441318312b8a36ad2dfcb94b5469080b433793" gracePeriod=30 Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.459084 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-log" containerID="cri-o://bfd1785d5e2c4a7e31cce9c1c527cda33937f0505156dcde14c8c8740009470d" gracePeriod=30 Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.472192 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:25:55 crc kubenswrapper[4809]: I0312 08:25:55.472758 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="04249628-0bfe-4de8-b6ad-f8508e7b1a8a" containerName="nova-scheduler-scheduler" containerID="cri-o://3086965d63cda37dedda0b1906cd757d0cad972875d3261a35076c43e2761d72" gracePeriod=30 Mar 12 
08:25:56 crc kubenswrapper[4809]: I0312 08:25:56.354197 4809 generic.go:334] "Generic (PLEG): container finished" podID="d7871824-2524-4083-8a87-f7f27f0f533f" containerID="bfd1785d5e2c4a7e31cce9c1c527cda33937f0505156dcde14c8c8740009470d" exitCode=143 Mar 12 08:25:56 crc kubenswrapper[4809]: I0312 08:25:56.354308 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d7871824-2524-4083-8a87-f7f27f0f533f","Type":"ContainerDied","Data":"bfd1785d5e2c4a7e31cce9c1c527cda33937f0505156dcde14c8c8740009470d"} Mar 12 08:25:56 crc kubenswrapper[4809]: E0312 08:25:56.532496 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3086965d63cda37dedda0b1906cd757d0cad972875d3261a35076c43e2761d72" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 08:25:56 crc kubenswrapper[4809]: E0312 08:25:56.540670 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3086965d63cda37dedda0b1906cd757d0cad972875d3261a35076c43e2761d72" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 08:25:56 crc kubenswrapper[4809]: E0312 08:25:56.542302 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3086965d63cda37dedda0b1906cd757d0cad972875d3261a35076c43e2761d72" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 08:25:56 crc kubenswrapper[4809]: E0312 08:25:56.542422 4809 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="04249628-0bfe-4de8-b6ad-f8508e7b1a8a" containerName="nova-scheduler-scheduler"
Mar 12 08:25:56 crc kubenswrapper[4809]: W0312 08:25:56.780864 4809 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04249628_0bfe_4de8_b6ad_f8508e7b1a8a.slice/crio-52b8968390064a3028329ea2daac949e19b60d9b6ac724f35f0f0ed82b3cf764": error while statting cgroup v2: [unable to parse /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04249628_0bfe_4de8_b6ad_f8508e7b1a8a.slice/crio-52b8968390064a3028329ea2daac949e19b60d9b6ac724f35f0f0ed82b3cf764/memory.stat: read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04249628_0bfe_4de8_b6ad_f8508e7b1a8a.slice/crio-52b8968390064a3028329ea2daac949e19b60d9b6ac724f35f0f0ed82b3cf764/memory.stat: no such device], continuing to push stats
Mar 12 08:25:57 crc kubenswrapper[4809]: I0312 08:25:57.124184 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr"
Mar 12 08:25:57 crc kubenswrapper[4809]: I0312 08:25:57.257387 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-7c7km"]
Mar 12 08:25:57 crc kubenswrapper[4809]: I0312 08:25:57.257657 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" podUID="43a2fa45-78b7-4556-802d-ec28c44a4f12" containerName="dnsmasq-dns" containerID="cri-o://228d81e87f41e7ab431405d079127d1d4ef3ecb92b4b42b615a7e777aa12533e" gracePeriod=10
Mar 12 08:25:57 crc kubenswrapper[4809]: I0312 08:25:57.403375 4809 generic.go:334] "Generic (PLEG): container finished" podID="04249628-0bfe-4de8-b6ad-f8508e7b1a8a" containerID="3086965d63cda37dedda0b1906cd757d0cad972875d3261a35076c43e2761d72" exitCode=0
Mar 12 08:25:57 crc kubenswrapper[4809]: I0312 08:25:57.403433 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04249628-0bfe-4de8-b6ad-f8508e7b1a8a","Type":"ContainerDied","Data":"3086965d63cda37dedda0b1906cd757d0cad972875d3261a35076c43e2761d72"}
Mar 12 08:25:58 crc kubenswrapper[4809]: I0312 08:25:58.419632 4809 generic.go:334] "Generic (PLEG): container finished" podID="43a2fa45-78b7-4556-802d-ec28c44a4f12" containerID="228d81e87f41e7ab431405d079127d1d4ef3ecb92b4b42b615a7e777aa12533e" exitCode=0
Mar 12 08:25:58 crc kubenswrapper[4809]: I0312 08:25:58.419719 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" event={"ID":"43a2fa45-78b7-4556-802d-ec28c44a4f12","Type":"ContainerDied","Data":"228d81e87f41e7ab431405d079127d1d4ef3ecb92b4b42b615a7e777aa12533e"}
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.436324 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d7871824-2524-4083-8a87-f7f27f0f533f","Type":"ContainerDied","Data":"158b3499e55f9be843afd74724441318312b8a36ad2dfcb94b5469080b433793"}
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.436263 4809 generic.go:334] "Generic (PLEG): container finished" podID="d7871824-2524-4083-8a87-f7f27f0f533f" containerID="158b3499e55f9be843afd74724441318312b8a36ad2dfcb94b5469080b433793" exitCode=0
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.648099 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bc2l4"
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.659381 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.747866 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-config-data\") pod \"20b188d8-97a2-47ef-a863-e243b1f38483\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") "
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.748082 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5h65\" (UniqueName: \"kubernetes.io/projected/20b188d8-97a2-47ef-a863-e243b1f38483-kube-api-access-f5h65\") pod \"20b188d8-97a2-47ef-a863-e243b1f38483\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") "
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.748196 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-combined-ca-bundle\") pod \"20b188d8-97a2-47ef-a863-e243b1f38483\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") "
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.748272 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t64vb\" (UniqueName: \"kubernetes.io/projected/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-kube-api-access-t64vb\") pod \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") "
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.748412 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-config-data\") pod \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") "
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.748454 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-scripts\") pod \"20b188d8-97a2-47ef-a863-e243b1f38483\" (UID: \"20b188d8-97a2-47ef-a863-e243b1f38483\") "
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.748494 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-combined-ca-bundle\") pod \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\" (UID: \"04249628-0bfe-4de8-b6ad-f8508e7b1a8a\") "
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.793991 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-scripts" (OuterVolumeSpecName: "scripts") pod "20b188d8-97a2-47ef-a863-e243b1f38483" (UID: "20b188d8-97a2-47ef-a863-e243b1f38483"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.794191 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-kube-api-access-t64vb" (OuterVolumeSpecName: "kube-api-access-t64vb") pod "04249628-0bfe-4de8-b6ad-f8508e7b1a8a" (UID: "04249628-0bfe-4de8-b6ad-f8508e7b1a8a"). InnerVolumeSpecName "kube-api-access-t64vb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.797160 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b188d8-97a2-47ef-a863-e243b1f38483-kube-api-access-f5h65" (OuterVolumeSpecName: "kube-api-access-f5h65") pod "20b188d8-97a2-47ef-a863-e243b1f38483" (UID: "20b188d8-97a2-47ef-a863-e243b1f38483"). InnerVolumeSpecName "kube-api-access-f5h65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.809489 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-config-data" (OuterVolumeSpecName: "config-data") pod "04249628-0bfe-4de8-b6ad-f8508e7b1a8a" (UID: "04249628-0bfe-4de8-b6ad-f8508e7b1a8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.813475 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04249628-0bfe-4de8-b6ad-f8508e7b1a8a" (UID: "04249628-0bfe-4de8-b6ad-f8508e7b1a8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.813491 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-config-data" (OuterVolumeSpecName: "config-data") pod "20b188d8-97a2-47ef-a863-e243b1f38483" (UID: "20b188d8-97a2-47ef-a863-e243b1f38483"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.830636 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20b188d8-97a2-47ef-a863-e243b1f38483" (UID: "20b188d8-97a2-47ef-a863-e243b1f38483"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.855672 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5h65\" (UniqueName: \"kubernetes.io/projected/20b188d8-97a2-47ef-a863-e243b1f38483-kube-api-access-f5h65\") on node \"crc\" DevicePath \"\""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.855723 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.855736 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t64vb\" (UniqueName: \"kubernetes.io/projected/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-kube-api-access-t64vb\") on node \"crc\" DevicePath \"\""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.855746 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.855758 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.855768 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04249628-0bfe-4de8-b6ad-f8508e7b1a8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:25:59 crc kubenswrapper[4809]: I0312 08:25:59.855779 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b188d8-97a2-47ef-a863-e243b1f38483-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.158770 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555066-dzhqs"]
Mar 12 08:26:00 crc kubenswrapper[4809]: E0312 08:26:00.159722 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04249628-0bfe-4de8-b6ad-f8508e7b1a8a" containerName="nova-scheduler-scheduler"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.159743 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="04249628-0bfe-4de8-b6ad-f8508e7b1a8a" containerName="nova-scheduler-scheduler"
Mar 12 08:26:00 crc kubenswrapper[4809]: E0312 08:26:00.159752 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b188d8-97a2-47ef-a863-e243b1f38483" containerName="nova-cell1-conductor-db-sync"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.159759 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b188d8-97a2-47ef-a863-e243b1f38483" containerName="nova-cell1-conductor-db-sync"
Mar 12 08:26:00 crc kubenswrapper[4809]: E0312 08:26:00.159804 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661848d4-7363-415e-8446-98751a00c6de" containerName="nova-manage"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.159811 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="661848d4-7363-415e-8446-98751a00c6de" containerName="nova-manage"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.160037 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="661848d4-7363-415e-8446-98751a00c6de" containerName="nova-manage"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.160051 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b188d8-97a2-47ef-a863-e243b1f38483" containerName="nova-cell1-conductor-db-sync"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.160080 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="04249628-0bfe-4de8-b6ad-f8508e7b1a8a" containerName="nova-scheduler-scheduler"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.161035 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555066-dzhqs"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.165933 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.166210 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.166808 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.212735 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555066-dzhqs"]
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.214841 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.228463 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.284176 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kghfx\" (UniqueName: \"kubernetes.io/projected/765761f4-7998-4a19-8ff6-d72af224951c-kube-api-access-kghfx\") pod \"auto-csr-approver-29555066-dzhqs\" (UID: \"765761f4-7998-4a19-8ff6-d72af224951c\") " pod="openshift-infra/auto-csr-approver-29555066-dzhqs"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386272 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-combined-ca-bundle\") pod \"d7871824-2524-4083-8a87-f7f27f0f533f\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386316 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94lln\" (UniqueName: \"kubernetes.io/projected/d7871824-2524-4083-8a87-f7f27f0f533f-kube-api-access-94lln\") pod \"d7871824-2524-4083-8a87-f7f27f0f533f\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386371 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-svc\") pod \"43a2fa45-78b7-4556-802d-ec28c44a4f12\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386390 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-config-data\") pod \"d7871824-2524-4083-8a87-f7f27f0f533f\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386459 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-config\") pod \"43a2fa45-78b7-4556-802d-ec28c44a4f12\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386483 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-nb\") pod \"43a2fa45-78b7-4556-802d-ec28c44a4f12\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386508 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-sb\") pod \"43a2fa45-78b7-4556-802d-ec28c44a4f12\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386536 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dzg8\" (UniqueName: \"kubernetes.io/projected/43a2fa45-78b7-4556-802d-ec28c44a4f12-kube-api-access-9dzg8\") pod \"43a2fa45-78b7-4556-802d-ec28c44a4f12\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386576 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7871824-2524-4083-8a87-f7f27f0f533f-logs\") pod \"d7871824-2524-4083-8a87-f7f27f0f533f\" (UID: \"d7871824-2524-4083-8a87-f7f27f0f533f\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386600 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-swift-storage-0\") pod \"43a2fa45-78b7-4556-802d-ec28c44a4f12\" (UID: \"43a2fa45-78b7-4556-802d-ec28c44a4f12\") "
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.386769 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kghfx\" (UniqueName: \"kubernetes.io/projected/765761f4-7998-4a19-8ff6-d72af224951c-kube-api-access-kghfx\") pod \"auto-csr-approver-29555066-dzhqs\" (UID: \"765761f4-7998-4a19-8ff6-d72af224951c\") " pod="openshift-infra/auto-csr-approver-29555066-dzhqs"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.387711 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7871824-2524-4083-8a87-f7f27f0f533f-logs" (OuterVolumeSpecName: "logs") pod "d7871824-2524-4083-8a87-f7f27f0f533f" (UID: "d7871824-2524-4083-8a87-f7f27f0f533f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.394506 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a2fa45-78b7-4556-802d-ec28c44a4f12-kube-api-access-9dzg8" (OuterVolumeSpecName: "kube-api-access-9dzg8") pod "43a2fa45-78b7-4556-802d-ec28c44a4f12" (UID: "43a2fa45-78b7-4556-802d-ec28c44a4f12"). InnerVolumeSpecName "kube-api-access-9dzg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.404453 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7871824-2524-4083-8a87-f7f27f0f533f-kube-api-access-94lln" (OuterVolumeSpecName: "kube-api-access-94lln") pod "d7871824-2524-4083-8a87-f7f27f0f533f" (UID: "d7871824-2524-4083-8a87-f7f27f0f533f"). InnerVolumeSpecName "kube-api-access-94lln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.409099 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kghfx\" (UniqueName: \"kubernetes.io/projected/765761f4-7998-4a19-8ff6-d72af224951c-kube-api-access-kghfx\") pod \"auto-csr-approver-29555066-dzhqs\" (UID: \"765761f4-7998-4a19-8ff6-d72af224951c\") " pod="openshift-infra/auto-csr-approver-29555066-dzhqs"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.432931 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7871824-2524-4083-8a87-f7f27f0f533f" (UID: "d7871824-2524-4083-8a87-f7f27f0f533f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.456223 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bc2l4" event={"ID":"20b188d8-97a2-47ef-a863-e243b1f38483","Type":"ContainerDied","Data":"c8798eee77e1b723703063c276c910457eb3b88690e631204da17705d8d91b68"}
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.458867 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8798eee77e1b723703063c276c910457eb3b88690e631204da17705d8d91b68"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.457918 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bc2l4"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.460539 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d7871824-2524-4083-8a87-f7f27f0f533f","Type":"ContainerDied","Data":"00d311cf6e1cefdee794500f6eae37bf511651973155bf65b6529ba23634a0b5"}
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.460584 4809 scope.go:117] "RemoveContainer" containerID="158b3499e55f9be843afd74724441318312b8a36ad2dfcb94b5469080b433793"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.460710 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.464508 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rr6w8" event={"ID":"d49c64c6-9e86-4436-a9e6-2723aecbacfe","Type":"ContainerStarted","Data":"2ef48b99426e10d93866e3f8eefb58f3074dc49c341744abd1cb4c1eb0c39535"}
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.466735 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"04249628-0bfe-4de8-b6ad-f8508e7b1a8a","Type":"ContainerDied","Data":"52b8968390064a3028329ea2daac949e19b60d9b6ac724f35f0f0ed82b3cf764"}
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.466831 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.469292 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerStarted","Data":"887886bfad60b8f4ef78d6bd0b647f991ad867dc8fb59ab7d762a77fc447c994"}
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.469983 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.475997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km" event={"ID":"43a2fa45-78b7-4556-802d-ec28c44a4f12","Type":"ContainerDied","Data":"ce37f3654fa076ea7833a758c920562b5fd40ee8bab7f74f35c0e458a0f67911"}
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.476599 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-7c7km"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.491681 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-config-data" (OuterVolumeSpecName: "config-data") pod "d7871824-2524-4083-8a87-f7f27f0f533f" (UID: "d7871824-2524-4083-8a87-f7f27f0f533f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.496271 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.497467 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94lln\" (UniqueName: \"kubernetes.io/projected/d7871824-2524-4083-8a87-f7f27f0f533f-kube-api-access-94lln\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.497499 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7871824-2524-4083-8a87-f7f27f0f533f-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.497528 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dzg8\" (UniqueName: \"kubernetes.io/projected/43a2fa45-78b7-4556-802d-ec28c44a4f12-kube-api-access-9dzg8\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.497540 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7871824-2524-4083-8a87-f7f27f0f533f-logs\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.500052 4809 scope.go:117] "RemoveContainer" containerID="bfd1785d5e2c4a7e31cce9c1c527cda33937f0505156dcde14c8c8740009470d"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.505775 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-rr6w8" podStartSLOduration=2.278153736 podStartE2EDuration="9.505743515s" podCreationTimestamp="2026-03-12 08:25:51 +0000 UTC" firstStartedPulling="2026-03-12 08:25:52.420239792 +0000 UTC m=+1626.002275525" lastFinishedPulling="2026-03-12 08:25:59.647829571 +0000 UTC m=+1633.229865304" observedRunningTime="2026-03-12 08:26:00.489712698 +0000 UTC m=+1634.071748431" watchObservedRunningTime="2026-03-12 08:26:00.505743515 +0000 UTC m=+1634.087779248"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.509866 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43a2fa45-78b7-4556-802d-ec28c44a4f12" (UID: "43a2fa45-78b7-4556-802d-ec28c44a4f12"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.516847 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-config" (OuterVolumeSpecName: "config") pod "43a2fa45-78b7-4556-802d-ec28c44a4f12" (UID: "43a2fa45-78b7-4556-802d-ec28c44a4f12"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.540320 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43a2fa45-78b7-4556-802d-ec28c44a4f12" (UID: "43a2fa45-78b7-4556-802d-ec28c44a4f12"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.541545 4809 scope.go:117] "RemoveContainer" containerID="3086965d63cda37dedda0b1906cd757d0cad972875d3261a35076c43e2761d72"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.550173 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.349036055 podStartE2EDuration="10.550139752s" podCreationTimestamp="2026-03-12 08:25:50 +0000 UTC" firstStartedPulling="2026-03-12 08:25:51.445943832 +0000 UTC m=+1625.027979565" lastFinishedPulling="2026-03-12 08:25:59.647047529 +0000 UTC m=+1633.229083262" observedRunningTime="2026-03-12 08:26:00.512625822 +0000 UTC m=+1634.094661565" watchObservedRunningTime="2026-03-12 08:26:00.550139752 +0000 UTC m=+1634.132175485"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.561010 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555066-dzhqs"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.568148 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "43a2fa45-78b7-4556-802d-ec28c44a4f12" (UID: "43a2fa45-78b7-4556-802d-ec28c44a4f12"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.596009 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43a2fa45-78b7-4556-802d-ec28c44a4f12" (UID: "43a2fa45-78b7-4556-802d-ec28c44a4f12"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.600192 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.608722 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.608782 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.608796 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-config\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.609365 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.609395 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43a2fa45-78b7-4556-802d-ec28c44a4f12-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.620682 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.631269 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 08:26:00 crc kubenswrapper[4809]: E0312 08:26:00.631958 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2fa45-78b7-4556-802d-ec28c44a4f12" containerName="dnsmasq-dns"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.631985 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2fa45-78b7-4556-802d-ec28c44a4f12" containerName="dnsmasq-dns"
Mar 12 08:26:00 crc kubenswrapper[4809]: E0312 08:26:00.632012 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-log"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.632023 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-log"
Mar 12 08:26:00 crc kubenswrapper[4809]: E0312 08:26:00.632041 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2fa45-78b7-4556-802d-ec28c44a4f12" containerName="init"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.632048 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2fa45-78b7-4556-802d-ec28c44a4f12" containerName="init"
Mar 12 08:26:00 crc kubenswrapper[4809]: E0312 08:26:00.632063 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-api"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.632069 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-api"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.632316 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2fa45-78b7-4556-802d-ec28c44a4f12" containerName="dnsmasq-dns"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.632346 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-api"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.632364 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" containerName="nova-api-log"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.633539 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.635231 4809 scope.go:117] "RemoveContainer" containerID="228d81e87f41e7ab431405d079127d1d4ef3ecb92b4b42b615a7e777aa12533e"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.638640 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.645004 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.669161 4809 scope.go:117] "RemoveContainer" containerID="e6f38ddc4df56ab60f5bdfe869a9252bd1f1232314ddc19aa459413915daf29e"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.710461 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-config-data\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.710591 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.710663 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq7cc\" (UniqueName: \"kubernetes.io/projected/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-kube-api-access-bq7cc\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.781267 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.783460 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.786564 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.813510 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq7cc\" (UniqueName: \"kubernetes.io/projected/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-kube-api-access-bq7cc\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.814575 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-config-data\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.815885 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.816229 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.850330 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.851592 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq7cc\" (UniqueName: \"kubernetes.io/projected/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-kube-api-access-bq7cc\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.851787 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-config-data\") pod \"nova-scheduler-0\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " pod="openstack/nova-scheduler-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.918713 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcljt\" (UniqueName: \"kubernetes.io/projected/58eaf15a-e31e-4457-8f23-a3b58f5bd943-kube-api-access-pcljt\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.919176 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58eaf15a-e31e-4457-8f23-a3b58f5bd943-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.919215 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/58eaf15a-e31e-4457-8f23-a3b58f5bd943-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:00 crc kubenswrapper[4809]: I0312 08:26:00.960315 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.021460 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58eaf15a-e31e-4457-8f23-a3b58f5bd943-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.021511 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58eaf15a-e31e-4457-8f23-a3b58f5bd943-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.021619 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcljt\" (UniqueName: \"kubernetes.io/projected/58eaf15a-e31e-4457-8f23-a3b58f5bd943-kube-api-access-pcljt\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.042298 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58eaf15a-e31e-4457-8f23-a3b58f5bd943-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.042528 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58eaf15a-e31e-4457-8f23-a3b58f5bd943-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.055427 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.062673 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcljt\" (UniqueName: \"kubernetes.io/projected/58eaf15a-e31e-4457-8f23-a3b58f5bd943-kube-api-access-pcljt\") pod \"nova-cell1-conductor-0\" (UID: \"58eaf15a-e31e-4457-8f23-a3b58f5bd943\") " pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.084754 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.141477 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.145110 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04249628-0bfe-4de8-b6ad-f8508e7b1a8a" path="/var/lib/kubelet/pods/04249628-0bfe-4de8-b6ad-f8508e7b1a8a/volumes" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.150079 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7871824-2524-4083-8a87-f7f27f0f533f" path="/var/lib/kubelet/pods/d7871824-2524-4083-8a87-f7f27f0f533f/volumes" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.150887 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.155607 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.155718 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.162034 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.180519 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-7c7km"] Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.199882 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-7c7km"] Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.254349 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.254807 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-config-data\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.254892 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-logs\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.255220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vppb\" (UniqueName: \"kubernetes.io/projected/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-kube-api-access-8vppb\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.354824 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555066-dzhqs"] Mar 12 08:26:01 crc kubenswrapper[4809]: W0312 08:26:01.356407 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod765761f4_7998_4a19_8ff6_d72af224951c.slice/crio-a415f61db876058e9b6f45b2c7d92c9d9181b443f50d6e7be59456907fba7dfd WatchSource:0}: Error finding container a415f61db876058e9b6f45b2c7d92c9d9181b443f50d6e7be59456907fba7dfd: Status 404 returned error can't find the container with id a415f61db876058e9b6f45b2c7d92c9d9181b443f50d6e7be59456907fba7dfd Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.358922 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.359066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-config-data\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.359103 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-logs\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.359207 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vppb\" (UniqueName: \"kubernetes.io/projected/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-kube-api-access-8vppb\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.359822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-logs\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.364343 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.367347 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-config-data\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.383479 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vppb\" (UniqueName: \"kubernetes.io/projected/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-kube-api-access-8vppb\") pod \"nova-api-0\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") " pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.486852 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.502417 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555066-dzhqs" event={"ID":"765761f4-7998-4a19-8ff6-d72af224951c","Type":"ContainerStarted","Data":"a415f61db876058e9b6f45b2c7d92c9d9181b443f50d6e7be59456907fba7dfd"} Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.617890 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:26:01 crc kubenswrapper[4809]: W0312 08:26:01.622789 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod586c76fb_3a10_473c_b9d9_14fb71cc4f6e.slice/crio-c50a7a5dac1545e19ad3551f5d386aa55f1934534f0b58f961c2a7c1e8945606 WatchSource:0}: Error finding container c50a7a5dac1545e19ad3551f5d386aa55f1934534f0b58f961c2a7c1e8945606: Status 404 returned error can't find the container with id c50a7a5dac1545e19ad3551f5d386aa55f1934534f0b58f961c2a7c1e8945606 Mar 12 08:26:01 crc kubenswrapper[4809]: I0312 08:26:01.746342 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.060266 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.516315 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"586c76fb-3a10-473c-b9d9-14fb71cc4f6e","Type":"ContainerStarted","Data":"d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2"} Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.516752 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"586c76fb-3a10-473c-b9d9-14fb71cc4f6e","Type":"ContainerStarted","Data":"c50a7a5dac1545e19ad3551f5d386aa55f1934534f0b58f961c2a7c1e8945606"} Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.520455 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b07ea172-0fa3-4ffe-a10a-fcf182734dc1","Type":"ContainerStarted","Data":"6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d"} Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.520501 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b07ea172-0fa3-4ffe-a10a-fcf182734dc1","Type":"ContainerStarted","Data":"de1e1b15ccfccb34deb99a6bda2316e6e84c177a809bda13abcf308972e9bbf2"} Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.522635 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"58eaf15a-e31e-4457-8f23-a3b58f5bd943","Type":"ContainerStarted","Data":"6f5bc17300e08ff90de6109a16cdd7a674f31ac17cc191c252873df456fd1347"} Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.522690 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"58eaf15a-e31e-4457-8f23-a3b58f5bd943","Type":"ContainerStarted","Data":"a1334cb3d922ae42fb34b0ee8348bfaf064def467fdc778b537c3ca9ad7f578f"} Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.522913 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-conductor-0" Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.571539 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.57151117 podStartE2EDuration="2.57151117s" podCreationTimestamp="2026-03-12 08:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:02.546627153 +0000 UTC m=+1636.128662886" watchObservedRunningTime="2026-03-12 08:26:02.57151117 +0000 UTC m=+1636.153546903" Mar 12 08:26:02 crc kubenswrapper[4809]: I0312 08:26:02.587316 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.587284599 podStartE2EDuration="2.587284599s" podCreationTimestamp="2026-03-12 08:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:02.565137926 +0000 UTC m=+1636.147173669" watchObservedRunningTime="2026-03-12 08:26:02.587284599 +0000 UTC m=+1636.169320352" Mar 12 08:26:03 crc kubenswrapper[4809]: I0312 08:26:03.124061 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a2fa45-78b7-4556-802d-ec28c44a4f12" path="/var/lib/kubelet/pods/43a2fa45-78b7-4556-802d-ec28c44a4f12/volumes" Mar 12 08:26:03 crc kubenswrapper[4809]: I0312 08:26:03.542485 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b07ea172-0fa3-4ffe-a10a-fcf182734dc1","Type":"ContainerStarted","Data":"7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75"} Mar 12 08:26:03 crc kubenswrapper[4809]: I0312 08:26:03.545085 4809 generic.go:334] "Generic (PLEG): container finished" podID="d49c64c6-9e86-4436-a9e6-2723aecbacfe" containerID="2ef48b99426e10d93866e3f8eefb58f3074dc49c341744abd1cb4c1eb0c39535" exitCode=0 Mar 12 08:26:03 crc 
kubenswrapper[4809]: I0312 08:26:03.545138 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rr6w8" event={"ID":"d49c64c6-9e86-4436-a9e6-2723aecbacfe","Type":"ContainerDied","Data":"2ef48b99426e10d93866e3f8eefb58f3074dc49c341744abd1cb4c1eb0c39535"} Mar 12 08:26:03 crc kubenswrapper[4809]: I0312 08:26:03.546890 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555066-dzhqs" event={"ID":"765761f4-7998-4a19-8ff6-d72af224951c","Type":"ContainerStarted","Data":"e403d5813e66777eeb6d2c3addd9cc362ddb235c4ceb4fb50681b2d66622ae2a"} Mar 12 08:26:03 crc kubenswrapper[4809]: I0312 08:26:03.594162 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.594136653 podStartE2EDuration="2.594136653s" podCreationTimestamp="2026-03-12 08:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:03.563382997 +0000 UTC m=+1637.145418750" watchObservedRunningTime="2026-03-12 08:26:03.594136653 +0000 UTC m=+1637.176172386" Mar 12 08:26:03 crc kubenswrapper[4809]: I0312 08:26:03.611275 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555066-dzhqs" podStartSLOduration=2.515365975 podStartE2EDuration="3.61125213s" podCreationTimestamp="2026-03-12 08:26:00 +0000 UTC" firstStartedPulling="2026-03-12 08:26:01.363881065 +0000 UTC m=+1634.945916798" lastFinishedPulling="2026-03-12 08:26:02.45976721 +0000 UTC m=+1636.041802953" observedRunningTime="2026-03-12 08:26:03.576791202 +0000 UTC m=+1637.158826935" watchObservedRunningTime="2026-03-12 08:26:03.61125213 +0000 UTC m=+1637.193287863" Mar 12 08:26:04 crc kubenswrapper[4809]: I0312 08:26:04.106829 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:26:04 crc 
kubenswrapper[4809]: E0312 08:26:04.107517 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:26:04 crc kubenswrapper[4809]: I0312 08:26:04.561525 4809 generic.go:334] "Generic (PLEG): container finished" podID="765761f4-7998-4a19-8ff6-d72af224951c" containerID="e403d5813e66777eeb6d2c3addd9cc362ddb235c4ceb4fb50681b2d66622ae2a" exitCode=0 Mar 12 08:26:04 crc kubenswrapper[4809]: I0312 08:26:04.561626 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555066-dzhqs" event={"ID":"765761f4-7998-4a19-8ff6-d72af224951c","Type":"ContainerDied","Data":"e403d5813e66777eeb6d2c3addd9cc362ddb235c4ceb4fb50681b2d66622ae2a"} Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.038427 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.193885 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7sbg\" (UniqueName: \"kubernetes.io/projected/d49c64c6-9e86-4436-a9e6-2723aecbacfe-kube-api-access-b7sbg\") pod \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.194093 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-config-data\") pod \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.194278 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-combined-ca-bundle\") pod \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.194359 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-scripts\") pod \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\" (UID: \"d49c64c6-9e86-4436-a9e6-2723aecbacfe\") " Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.200803 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49c64c6-9e86-4436-a9e6-2723aecbacfe-kube-api-access-b7sbg" (OuterVolumeSpecName: "kube-api-access-b7sbg") pod "d49c64c6-9e86-4436-a9e6-2723aecbacfe" (UID: "d49c64c6-9e86-4436-a9e6-2723aecbacfe"). InnerVolumeSpecName "kube-api-access-b7sbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.210346 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-scripts" (OuterVolumeSpecName: "scripts") pod "d49c64c6-9e86-4436-a9e6-2723aecbacfe" (UID: "d49c64c6-9e86-4436-a9e6-2723aecbacfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.228701 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d49c64c6-9e86-4436-a9e6-2723aecbacfe" (UID: "d49c64c6-9e86-4436-a9e6-2723aecbacfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.239370 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-config-data" (OuterVolumeSpecName: "config-data") pod "d49c64c6-9e86-4436-a9e6-2723aecbacfe" (UID: "d49c64c6-9e86-4436-a9e6-2723aecbacfe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.298013 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.298205 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.298308 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49c64c6-9e86-4436-a9e6-2723aecbacfe-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.298387 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7sbg\" (UniqueName: \"kubernetes.io/projected/d49c64c6-9e86-4436-a9e6-2723aecbacfe-kube-api-access-b7sbg\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.577900 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rr6w8" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.577892 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rr6w8" event={"ID":"d49c64c6-9e86-4436-a9e6-2723aecbacfe","Type":"ContainerDied","Data":"1eb12cc6abbc3e05c3cb78c350f404292318c6cd4874e449a5e73ccd3aa24c15"} Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.577963 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb12cc6abbc3e05c3cb78c350f404292318c6cd4874e449a5e73ccd3aa24c15" Mar 12 08:26:05 crc kubenswrapper[4809]: I0312 08:26:05.961260 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.077334 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555066-dzhqs" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.229289 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kghfx\" (UniqueName: \"kubernetes.io/projected/765761f4-7998-4a19-8ff6-d72af224951c-kube-api-access-kghfx\") pod \"765761f4-7998-4a19-8ff6-d72af224951c\" (UID: \"765761f4-7998-4a19-8ff6-d72af224951c\") " Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.236954 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765761f4-7998-4a19-8ff6-d72af224951c-kube-api-access-kghfx" (OuterVolumeSpecName: "kube-api-access-kghfx") pod "765761f4-7998-4a19-8ff6-d72af224951c" (UID: "765761f4-7998-4a19-8ff6-d72af224951c"). InnerVolumeSpecName "kube-api-access-kghfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.334361 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kghfx\" (UniqueName: \"kubernetes.io/projected/765761f4-7998-4a19-8ff6-d72af224951c-kube-api-access-kghfx\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.570482 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 12 08:26:06 crc kubenswrapper[4809]: E0312 08:26:06.571335 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49c64c6-9e86-4436-a9e6-2723aecbacfe" containerName="aodh-db-sync" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.571364 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49c64c6-9e86-4436-a9e6-2723aecbacfe" containerName="aodh-db-sync" Mar 12 08:26:06 crc kubenswrapper[4809]: E0312 08:26:06.571387 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765761f4-7998-4a19-8ff6-d72af224951c" containerName="oc" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.571393 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="765761f4-7998-4a19-8ff6-d72af224951c" containerName="oc" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.571641 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49c64c6-9e86-4436-a9e6-2723aecbacfe" containerName="aodh-db-sync" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.571674 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="765761f4-7998-4a19-8ff6-d72af224951c" containerName="oc" Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.574061 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.577430 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.577593 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.577849 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-qhrlw"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.603007 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555066-dzhqs" event={"ID":"765761f4-7998-4a19-8ff6-d72af224951c","Type":"ContainerDied","Data":"a415f61db876058e9b6f45b2c7d92c9d9181b443f50d6e7be59456907fba7dfd"}
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.603066 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a415f61db876058e9b6f45b2c7d92c9d9181b443f50d6e7be59456907fba7dfd"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.603038 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555066-dzhqs"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.666818 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.747812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.747892 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-config-data\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.747932 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8t4\" (UniqueName: \"kubernetes.io/projected/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-kube-api-access-mg8t4\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.748002 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-scripts\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.851421 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.851552 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-config-data\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.851592 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg8t4\" (UniqueName: \"kubernetes.io/projected/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-kube-api-access-mg8t4\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.851709 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-scripts\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.856926 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-config-data\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.857714 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.867907 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-scripts\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.884022 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg8t4\" (UniqueName: \"kubernetes.io/projected/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-kube-api-access-mg8t4\") pod \"aodh-0\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " pod="openstack/aodh-0"
Mar 12 08:26:06 crc kubenswrapper[4809]: I0312 08:26:06.903476 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Mar 12 08:26:07 crc kubenswrapper[4809]: I0312 08:26:07.230261 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555060-zzm7h"]
Mar 12 08:26:07 crc kubenswrapper[4809]: I0312 08:26:07.260650 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555060-zzm7h"]
Mar 12 08:26:07 crc kubenswrapper[4809]: I0312 08:26:07.428756 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Mar 12 08:26:07 crc kubenswrapper[4809]: W0312 08:26:07.434947 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5e68398_e067_4b2a_9bbf_7d1e79911ccc.slice/crio-11f5170ae121af9e82b11c8a27e09731fc6ca5a2838ce28aa609a4ac414768b0 WatchSource:0}: Error finding container 11f5170ae121af9e82b11c8a27e09731fc6ca5a2838ce28aa609a4ac414768b0: Status 404 returned error can't find the container with id 11f5170ae121af9e82b11c8a27e09731fc6ca5a2838ce28aa609a4ac414768b0
Mar 12 08:26:07 crc kubenswrapper[4809]: I0312 08:26:07.616012 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerStarted","Data":"11f5170ae121af9e82b11c8a27e09731fc6ca5a2838ce28aa609a4ac414768b0"}
Mar 12 08:26:08 crc kubenswrapper[4809]: I0312 08:26:08.632265 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerStarted","Data":"6b275c82322f9321253aaa496bb5f1b14e950eb39dfc5d82f4514aeeadd004d3"}
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.129704 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb2286a3-9a77-43c2-903d-08ab8468c9b6" path="/var/lib/kubelet/pods/eb2286a3-9a77-43c2-903d-08ab8468c9b6/volumes"
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.141690 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.142086 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="ceilometer-central-agent" containerID="cri-o://816deaf821bdae861bc904a76d1824e99542790a8c447ae628f952a789953810" gracePeriod=30
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.142282 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="ceilometer-notification-agent" containerID="cri-o://c07acd4906983bb3a28bde7781a63c5e94337c2c3548ec007980bcb1320d96f4" gracePeriod=30
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.142324 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="proxy-httpd" containerID="cri-o://887886bfad60b8f4ef78d6bd0b647f991ad867dc8fb59ab7d762a77fc447c994" gracePeriod=30
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.142443 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="sg-core" containerID="cri-o://cd12592ebc6b4b12d66a89bed2e3b33b2772025ad6963dc28af5e402db06c71d" gracePeriod=30
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.666439 4809 generic.go:334] "Generic (PLEG): container finished" podID="77145032-e837-4a39-bc67-703645401e34" containerID="887886bfad60b8f4ef78d6bd0b647f991ad867dc8fb59ab7d762a77fc447c994" exitCode=0
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.666758 4809 generic.go:334] "Generic (PLEG): container finished" podID="77145032-e837-4a39-bc67-703645401e34" containerID="cd12592ebc6b4b12d66a89bed2e3b33b2772025ad6963dc28af5e402db06c71d" exitCode=2
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.666692 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerDied","Data":"887886bfad60b8f4ef78d6bd0b647f991ad867dc8fb59ab7d762a77fc447c994"}
Mar 12 08:26:09 crc kubenswrapper[4809]: I0312 08:26:09.666811 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerDied","Data":"cd12592ebc6b4b12d66a89bed2e3b33b2772025ad6963dc28af5e402db06c71d"}
Mar 12 08:26:10 crc kubenswrapper[4809]: I0312 08:26:10.181076 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Mar 12 08:26:10 crc kubenswrapper[4809]: I0312 08:26:10.702701 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerStarted","Data":"4f7e12c7d8d92e9dbb842e71442901b29ea4d92a6db968954ebf7cf565bb1bbd"}
Mar 12 08:26:10 crc kubenswrapper[4809]: I0312 08:26:10.707645 4809 generic.go:334] "Generic (PLEG): container finished" podID="77145032-e837-4a39-bc67-703645401e34" containerID="c07acd4906983bb3a28bde7781a63c5e94337c2c3548ec007980bcb1320d96f4" exitCode=0
Mar 12 08:26:10 crc kubenswrapper[4809]: I0312 08:26:10.707675 4809 generic.go:334] "Generic (PLEG): container finished" podID="77145032-e837-4a39-bc67-703645401e34" containerID="816deaf821bdae861bc904a76d1824e99542790a8c447ae628f952a789953810" exitCode=0
Mar 12 08:26:10 crc kubenswrapper[4809]: I0312 08:26:10.707692 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerDied","Data":"c07acd4906983bb3a28bde7781a63c5e94337c2c3548ec007980bcb1320d96f4"}
Mar 12 08:26:10 crc kubenswrapper[4809]: I0312 08:26:10.707714 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerDied","Data":"816deaf821bdae861bc904a76d1824e99542790a8c447ae628f952a789953810"}
Mar 12 08:26:10 crc kubenswrapper[4809]: I0312 08:26:10.960625 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 12 08:26:10 crc kubenswrapper[4809]: I0312 08:26:10.976967 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.040530 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.083510 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-scripts\") pod \"77145032-e837-4a39-bc67-703645401e34\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") "
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.083653 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-sg-core-conf-yaml\") pod \"77145032-e837-4a39-bc67-703645401e34\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") "
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.083800 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-config-data\") pod \"77145032-e837-4a39-bc67-703645401e34\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") "
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.083855 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj4qp\" (UniqueName: \"kubernetes.io/projected/77145032-e837-4a39-bc67-703645401e34-kube-api-access-vj4qp\") pod \"77145032-e837-4a39-bc67-703645401e34\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") "
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.083900 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-combined-ca-bundle\") pod \"77145032-e837-4a39-bc67-703645401e34\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") "
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.084213 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-log-httpd\") pod \"77145032-e837-4a39-bc67-703645401e34\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") "
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.084271 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-run-httpd\") pod \"77145032-e837-4a39-bc67-703645401e34\" (UID: \"77145032-e837-4a39-bc67-703645401e34\") "
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.085401 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "77145032-e837-4a39-bc67-703645401e34" (UID: "77145032-e837-4a39-bc67-703645401e34"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.085591 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "77145032-e837-4a39-bc67-703645401e34" (UID: "77145032-e837-4a39-bc67-703645401e34"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.086229 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.086253 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77145032-e837-4a39-bc67-703645401e34-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.092104 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-scripts" (OuterVolumeSpecName: "scripts") pod "77145032-e837-4a39-bc67-703645401e34" (UID: "77145032-e837-4a39-bc67-703645401e34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.095461 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77145032-e837-4a39-bc67-703645401e34-kube-api-access-vj4qp" (OuterVolumeSpecName: "kube-api-access-vj4qp") pod "77145032-e837-4a39-bc67-703645401e34" (UID: "77145032-e837-4a39-bc67-703645401e34"). InnerVolumeSpecName "kube-api-access-vj4qp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.122334 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "77145032-e837-4a39-bc67-703645401e34" (UID: "77145032-e837-4a39-bc67-703645401e34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.196255 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.196288 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.196306 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj4qp\" (UniqueName: \"kubernetes.io/projected/77145032-e837-4a39-bc67-703645401e34-kube-api-access-vj4qp\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.232860 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.250601 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77145032-e837-4a39-bc67-703645401e34" (UID: "77145032-e837-4a39-bc67-703645401e34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.291387 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-config-data" (OuterVolumeSpecName: "config-data") pod "77145032-e837-4a39-bc67-703645401e34" (UID: "77145032-e837-4a39-bc67-703645401e34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.299627 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.299670 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77145032-e837-4a39-bc67-703645401e34-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.487814 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.487879 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.746900 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77145032-e837-4a39-bc67-703645401e34","Type":"ContainerDied","Data":"8db5c8db53b01b53be684603dd951cc5ab33fef80237c345236b345dbae416f4"}
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.746969 4809 scope.go:117] "RemoveContainer" containerID="887886bfad60b8f4ef78d6bd0b647f991ad867dc8fb59ab7d762a77fc447c994"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.747576 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.814526 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.816275 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.825815 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.844177 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:26:11 crc kubenswrapper[4809]: E0312 08:26:11.844973 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="ceilometer-central-agent"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.844995 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="ceilometer-central-agent"
Mar 12 08:26:11 crc kubenswrapper[4809]: E0312 08:26:11.845022 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="sg-core"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.845030 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="sg-core"
Mar 12 08:26:11 crc kubenswrapper[4809]: E0312 08:26:11.845075 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="ceilometer-notification-agent"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.845082 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="ceilometer-notification-agent"
Mar 12 08:26:11 crc kubenswrapper[4809]: E0312 08:26:11.845098 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="proxy-httpd"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.845104 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="proxy-httpd"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.845399 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="sg-core"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.845422 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="proxy-httpd"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.845430 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="ceilometer-central-agent"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.845455 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="77145032-e837-4a39-bc67-703645401e34" containerName="ceilometer-notification-agent"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.853414 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.858014 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.858293 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 12 08:26:11 crc kubenswrapper[4809]: I0312 08:26:11.858011 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.024442 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.024938 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9mws\" (UniqueName: \"kubernetes.io/projected/a5581de1-27a7-4f7f-8388-904d787d5ab8-kube-api-access-c9mws\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.024963 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-log-httpd\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.025061 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-run-httpd\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.025145 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.025196 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-config-data\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.025266 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-scripts\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.127530 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.127641 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-config-data\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.127723 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-scripts\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.127794 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.127821 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9mws\" (UniqueName: \"kubernetes.io/projected/a5581de1-27a7-4f7f-8388-904d787d5ab8-kube-api-access-c9mws\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.127839 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-log-httpd\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.127905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-run-httpd\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.128464 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-run-httpd\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.131943 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-log-httpd\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.138037 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-scripts\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.138058 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.138217 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.139996 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-config-data\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.153842 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9mws\" (UniqueName: \"kubernetes.io/projected/a5581de1-27a7-4f7f-8388-904d787d5ab8-kube-api-access-c9mws\") pod \"ceilometer-0\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.192621 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.465089 4809 scope.go:117] "RemoveContainer" containerID="cd12592ebc6b4b12d66a89bed2e3b33b2772025ad6963dc28af5e402db06c71d"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.572356 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.572385 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.689291 4809 scope.go:117] "RemoveContainer" containerID="c07acd4906983bb3a28bde7781a63c5e94337c2c3548ec007980bcb1320d96f4"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.773645 4809 scope.go:117] "RemoveContainer" containerID="816deaf821bdae861bc904a76d1824e99542790a8c447ae628f952a789953810"
Mar 12 08:26:12 crc kubenswrapper[4809]: I0312 08:26:12.906396 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:26:13 crc kubenswrapper[4809]: I0312 08:26:13.105061 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 08:26:13 crc kubenswrapper[4809]: I0312 08:26:13.172854 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77145032-e837-4a39-bc67-703645401e34" path="/var/lib/kubelet/pods/77145032-e837-4a39-bc67-703645401e34/volumes"
Mar 12 08:26:13 crc kubenswrapper[4809]: I0312 08:26:13.822021 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerStarted","Data":"c0a53e776590ef0301ccac9e44bb490a3b17526ad85c3a60aaf46750e1a4de88"}
Mar 12 08:26:13 crc kubenswrapper[4809]: I0312 08:26:13.829050 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerStarted","Data":"20bc9edb132fafdc73409542f43cc38ab68d8317420b4e7acf5927cc5e1bc5c5"}
Mar 12 08:26:14 crc kubenswrapper[4809]: I0312 08:26:14.843586 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerStarted","Data":"ae4e3291e11f40ff9801ba2f53a90feaa68a42cfcf2ed5288c6d532f211b51d2"}
Mar 12 08:26:14 crc kubenswrapper[4809]: I0312 08:26:14.849820 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerStarted","Data":"8d09117fdc064e2b240b8d3ee0392adb07387ebea513b338751cf143351ce9df"}
Mar 12 08:26:14 crc kubenswrapper[4809]: I0312 08:26:14.850168 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-api" containerID="cri-o://6b275c82322f9321253aaa496bb5f1b14e950eb39dfc5d82f4514aeeadd004d3" gracePeriod=30
Mar 12 08:26:14 crc kubenswrapper[4809]: I0312 08:26:14.850379 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-notifier" containerID="cri-o://20bc9edb132fafdc73409542f43cc38ab68d8317420b4e7acf5927cc5e1bc5c5" gracePeriod=30
Mar 12 08:26:14 crc kubenswrapper[4809]: I0312 08:26:14.850388 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-evaluator" containerID="cri-o://4f7e12c7d8d92e9dbb842e71442901b29ea4d92a6db968954ebf7cf565bb1bbd" gracePeriod=30
Mar 12 08:26:14 crc kubenswrapper[4809]: I0312 08:26:14.850765 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-listener" containerID="cri-o://8d09117fdc064e2b240b8d3ee0392adb07387ebea513b338751cf143351ce9df" gracePeriod=30
Mar 12 08:26:14 crc kubenswrapper[4809]: I0312 08:26:14.891660 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.12116416 podStartE2EDuration="8.891636517s" podCreationTimestamp="2026-03-12 08:26:06 +0000 UTC" firstStartedPulling="2026-03-12 08:26:07.437237279 +0000 UTC m=+1641.019273012" lastFinishedPulling="2026-03-12 08:26:14.207709636 +0000 UTC m=+1647.789745369" observedRunningTime="2026-03-12 08:26:14.887057053 +0000 UTC m=+1648.469092786" watchObservedRunningTime="2026-03-12 08:26:14.891636517 +0000 UTC m=+1648.473672250"
Mar 12 08:26:15 crc kubenswrapper[4809]: I0312 08:26:15.864175 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerStarted","Data":"560c87acd58d8a281af7ea2df611e66383b0d7e979566659cf3dd4040a2f50d0"}
Mar 12 08:26:15 crc kubenswrapper[4809]: I0312 08:26:15.866423 4809 generic.go:334] "Generic (PLEG): container finished" podID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerID="20bc9edb132fafdc73409542f43cc38ab68d8317420b4e7acf5927cc5e1bc5c5" exitCode=0
Mar 12 08:26:15 crc kubenswrapper[4809]: I0312 08:26:15.866462 4809 generic.go:334] "Generic (PLEG): container finished" podID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerID="4f7e12c7d8d92e9dbb842e71442901b29ea4d92a6db968954ebf7cf565bb1bbd" exitCode=0
Mar 12 08:26:15 crc kubenswrapper[4809]: I0312 08:26:15.866477 4809 generic.go:334] "Generic (PLEG): container finished" podID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerID="6b275c82322f9321253aaa496bb5f1b14e950eb39dfc5d82f4514aeeadd004d3" exitCode=0
Mar 12 08:26:15 crc kubenswrapper[4809]: I0312 08:26:15.866495 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerDied","Data":"20bc9edb132fafdc73409542f43cc38ab68d8317420b4e7acf5927cc5e1bc5c5"}
Mar 12 08:26:15 crc kubenswrapper[4809]: I0312 08:26:15.866554 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerDied","Data":"4f7e12c7d8d92e9dbb842e71442901b29ea4d92a6db968954ebf7cf565bb1bbd"}
Mar 12 08:26:15 crc kubenswrapper[4809]: I0312 08:26:15.866567 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerDied","Data":"6b275c82322f9321253aaa496bb5f1b14e950eb39dfc5d82f4514aeeadd004d3"}
Mar 12 08:26:16 crc kubenswrapper[4809]: I0312 08:26:16.106821 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10"
Mar 12 08:26:16 crc kubenswrapper[4809]: E0312 08:26:16.107461 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 08:26:16 crc kubenswrapper[4809]: I0312 08:26:16.882475
4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerStarted","Data":"5f939c40d3bdca00bb6117aebc702d36a2fd9408e6bbcb32a26ac93845af80c3"} Mar 12 08:26:17 crc kubenswrapper[4809]: I0312 08:26:17.899644 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerStarted","Data":"c7e34f58e8f5fedcc550976b179a97f44622c003b60541d3d29865089b3bc0f0"} Mar 12 08:26:17 crc kubenswrapper[4809]: I0312 08:26:17.900506 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:26:17 crc kubenswrapper[4809]: I0312 08:26:17.899892 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="proxy-httpd" containerID="cri-o://c7e34f58e8f5fedcc550976b179a97f44622c003b60541d3d29865089b3bc0f0" gracePeriod=30 Mar 12 08:26:17 crc kubenswrapper[4809]: I0312 08:26:17.899815 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="ceilometer-central-agent" containerID="cri-o://ae4e3291e11f40ff9801ba2f53a90feaa68a42cfcf2ed5288c6d532f211b51d2" gracePeriod=30 Mar 12 08:26:17 crc kubenswrapper[4809]: I0312 08:26:17.899939 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="sg-core" containerID="cri-o://5f939c40d3bdca00bb6117aebc702d36a2fd9408e6bbcb32a26ac93845af80c3" gracePeriod=30 Mar 12 08:26:17 crc kubenswrapper[4809]: I0312 08:26:17.900003 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="ceilometer-notification-agent" 
containerID="cri-o://560c87acd58d8a281af7ea2df611e66383b0d7e979566659cf3dd4040a2f50d0" gracePeriod=30 Mar 12 08:26:17 crc kubenswrapper[4809]: I0312 08:26:17.934450 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.510685177 podStartE2EDuration="6.934425736s" podCreationTimestamp="2026-03-12 08:26:11 +0000 UTC" firstStartedPulling="2026-03-12 08:26:13.109109385 +0000 UTC m=+1646.691145118" lastFinishedPulling="2026-03-12 08:26:17.532849924 +0000 UTC m=+1651.114885677" observedRunningTime="2026-03-12 08:26:17.927902629 +0000 UTC m=+1651.509938372" watchObservedRunningTime="2026-03-12 08:26:17.934425736 +0000 UTC m=+1651.516461489" Mar 12 08:26:18 crc kubenswrapper[4809]: I0312 08:26:18.916105 4809 generic.go:334] "Generic (PLEG): container finished" podID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerID="5f939c40d3bdca00bb6117aebc702d36a2fd9408e6bbcb32a26ac93845af80c3" exitCode=2 Mar 12 08:26:18 crc kubenswrapper[4809]: I0312 08:26:18.916178 4809 generic.go:334] "Generic (PLEG): container finished" podID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerID="560c87acd58d8a281af7ea2df611e66383b0d7e979566659cf3dd4040a2f50d0" exitCode=0 Mar 12 08:26:18 crc kubenswrapper[4809]: I0312 08:26:18.916172 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerDied","Data":"5f939c40d3bdca00bb6117aebc702d36a2fd9408e6bbcb32a26ac93845af80c3"} Mar 12 08:26:18 crc kubenswrapper[4809]: I0312 08:26:18.916242 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerDied","Data":"560c87acd58d8a281af7ea2df611e66383b0d7e979566659cf3dd4040a2f50d0"} Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.778133 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.783253 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.817908 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-logs\") pod \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.818086 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-config-data\") pod \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.818124 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-config-data\") pod \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.818265 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-combined-ca-bundle\") pod \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.818361 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9jg2\" (UniqueName: \"kubernetes.io/projected/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-kube-api-access-r9jg2\") pod \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\" (UID: \"1483a15a-a49d-46c7-886b-85e8f7fcfc0b\") " Mar 12 
08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.818393 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk6m6\" (UniqueName: \"kubernetes.io/projected/3b1185a8-a7d9-4f17-b98f-02b8051d196a-kube-api-access-kk6m6\") pod \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.818492 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-combined-ca-bundle\") pod \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\" (UID: \"3b1185a8-a7d9-4f17-b98f-02b8051d196a\") " Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.818489 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-logs" (OuterVolumeSpecName: "logs") pod "1483a15a-a49d-46c7-886b-85e8f7fcfc0b" (UID: "1483a15a-a49d-46c7-886b-85e8f7fcfc0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.819164 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.825899 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-kube-api-access-r9jg2" (OuterVolumeSpecName: "kube-api-access-r9jg2") pod "1483a15a-a49d-46c7-886b-85e8f7fcfc0b" (UID: "1483a15a-a49d-46c7-886b-85e8f7fcfc0b"). InnerVolumeSpecName "kube-api-access-r9jg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.826077 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1185a8-a7d9-4f17-b98f-02b8051d196a-kube-api-access-kk6m6" (OuterVolumeSpecName: "kube-api-access-kk6m6") pod "3b1185a8-a7d9-4f17-b98f-02b8051d196a" (UID: "3b1185a8-a7d9-4f17-b98f-02b8051d196a"). InnerVolumeSpecName "kube-api-access-kk6m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.867518 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1185a8-a7d9-4f17-b98f-02b8051d196a" (UID: "3b1185a8-a7d9-4f17-b98f-02b8051d196a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.880832 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-config-data" (OuterVolumeSpecName: "config-data") pod "1483a15a-a49d-46c7-886b-85e8f7fcfc0b" (UID: "1483a15a-a49d-46c7-886b-85e8f7fcfc0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.881186 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1483a15a-a49d-46c7-886b-85e8f7fcfc0b" (UID: "1483a15a-a49d-46c7-886b-85e8f7fcfc0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.892718 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-config-data" (OuterVolumeSpecName: "config-data") pod "3b1185a8-a7d9-4f17-b98f-02b8051d196a" (UID: "3b1185a8-a7d9-4f17-b98f-02b8051d196a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.922482 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.922517 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.922531 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1185a8-a7d9-4f17-b98f-02b8051d196a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.922543 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.922558 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9jg2\" (UniqueName: \"kubernetes.io/projected/1483a15a-a49d-46c7-886b-85e8f7fcfc0b-kube-api-access-r9jg2\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.922569 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk6m6\" (UniqueName: 
\"kubernetes.io/projected/3b1185a8-a7d9-4f17-b98f-02b8051d196a-kube-api-access-kk6m6\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.953463 4809 generic.go:334] "Generic (PLEG): container finished" podID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerID="46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0" exitCode=137 Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.953635 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1483a15a-a49d-46c7-886b-85e8f7fcfc0b","Type":"ContainerDied","Data":"46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0"} Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.953690 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1483a15a-a49d-46c7-886b-85e8f7fcfc0b","Type":"ContainerDied","Data":"21d0cb3cd65bcea9071cc8298bbe1d8b3843b1c834b3f898ac925e752f273924"} Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.953722 4809 scope.go:117] "RemoveContainer" containerID="46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.953960 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.963197 4809 generic.go:334] "Generic (PLEG): container finished" podID="3b1185a8-a7d9-4f17-b98f-02b8051d196a" containerID="f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9" exitCode=137 Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.963247 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b1185a8-a7d9-4f17-b98f-02b8051d196a","Type":"ContainerDied","Data":"f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9"} Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.963281 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b1185a8-a7d9-4f17-b98f-02b8051d196a","Type":"ContainerDied","Data":"273dda83bb5f2b01afb8849c7e133afcaa9b9fb33ea408dfb71f7b8eb774d53c"} Mar 12 08:26:19 crc kubenswrapper[4809]: I0312 08:26:19.963382 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.007641 4809 scope.go:117] "RemoveContainer" containerID="b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.018687 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.051498 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.052266 4809 scope.go:117] "RemoveContainer" containerID="46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0" Mar 12 08:26:20 crc kubenswrapper[4809]: E0312 08:26:20.053461 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0\": container with ID starting with 46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0 not found: ID does not exist" containerID="46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.053521 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0"} err="failed to get container status \"46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0\": rpc error: code = NotFound desc = could not find container \"46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0\": container with ID starting with 46fa804680c4c132c053f72983664223c5736562ea0a038ca0e078cc72efa9b0 not found: ID does not exist" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.053561 4809 scope.go:117] "RemoveContainer" containerID="b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb" Mar 12 08:26:20 crc 
kubenswrapper[4809]: E0312 08:26:20.056409 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb\": container with ID starting with b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb not found: ID does not exist" containerID="b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.056452 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb"} err="failed to get container status \"b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb\": rpc error: code = NotFound desc = could not find container \"b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb\": container with ID starting with b875d847b8185f2b45627fe74323b145e0f95ab5abb43fa4ed36a0c5d3bef2eb not found: ID does not exist" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.056471 4809 scope.go:117] "RemoveContainer" containerID="f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.071338 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.110509 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:20 crc kubenswrapper[4809]: E0312 08:26:20.117910 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerName="nova-metadata-metadata" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.117953 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerName="nova-metadata-metadata" Mar 12 08:26:20 crc kubenswrapper[4809]: E0312 08:26:20.117973 
4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerName="nova-metadata-log" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.117982 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerName="nova-metadata-log" Mar 12 08:26:20 crc kubenswrapper[4809]: E0312 08:26:20.117998 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1185a8-a7d9-4f17-b98f-02b8051d196a" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.118004 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1185a8-a7d9-4f17-b98f-02b8051d196a" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.119944 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1185a8-a7d9-4f17-b98f-02b8051d196a" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.119964 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerName="nova-metadata-metadata" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.120000 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" containerName="nova-metadata-log" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.125455 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.130394 4809 scope.go:117] "RemoveContainer" containerID="f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.130445 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 08:26:20 crc kubenswrapper[4809]: E0312 08:26:20.137715 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9\": container with ID starting with f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9 not found: ID does not exist" containerID="f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.137798 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9"} err="failed to get container status \"f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9\": rpc error: code = NotFound desc = could not find container \"f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9\": container with ID starting with f135b96d0f7810dc8854db30f2c19ac1969e1b0450b0adea5447bc291a30f8b9 not found: ID does not exist" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.140057 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.153293 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.189458 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 
08:26:20.208714 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.211519 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.215008 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.215547 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.216836 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.229751 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.247995 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.248052 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb1b537-df6b-4e03-aa39-42380aac7bd8-logs\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.249064 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrm5\" (UniqueName: 
\"kubernetes.io/projected/4eb1b537-df6b-4e03-aa39-42380aac7bd8-kube-api-access-pfrm5\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.249750 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-config-data\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.249853 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.352066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.352143 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgfq\" (UniqueName: \"kubernetes.io/projected/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-kube-api-access-7xgfq\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.352184 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb1b537-df6b-4e03-aa39-42380aac7bd8-logs\") pod 
\"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.352269 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfrm5\" (UniqueName: \"kubernetes.io/projected/4eb1b537-df6b-4e03-aa39-42380aac7bd8-kube-api-access-pfrm5\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.352747 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.352942 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.352992 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb1b537-df6b-4e03-aa39-42380aac7bd8-logs\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.353007 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.353234 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-config-data\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.353320 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.353418 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.356675 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.356893 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.358088 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-config-data\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.371030 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfrm5\" (UniqueName: \"kubernetes.io/projected/4eb1b537-df6b-4e03-aa39-42380aac7bd8-kube-api-access-pfrm5\") pod \"nova-metadata-0\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.456190 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.456321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.456369 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.456466 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.456596 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgfq\" (UniqueName: \"kubernetes.io/projected/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-kube-api-access-7xgfq\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.461772 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.461852 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.462175 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.462315 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.483462 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgfq\" (UniqueName: \"kubernetes.io/projected/b77d704e-5a2f-48ba-ac3c-c8495bda44ff-kube-api-access-7xgfq\") pod \"nova-cell1-novncproxy-0\" (UID: \"b77d704e-5a2f-48ba-ac3c-c8495bda44ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.544205 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 08:26:20 crc kubenswrapper[4809]: I0312 08:26:20.567299 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:21 crc kubenswrapper[4809]: W0312 08:26:21.132825 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb77d704e_5a2f_48ba_ac3c_c8495bda44ff.slice/crio-965f88e4855c1476041ed5324fa20a2d4b4c1f56e12b28a22165001d9b3bae18 WatchSource:0}: Error finding container 965f88e4855c1476041ed5324fa20a2d4b4c1f56e12b28a22165001d9b3bae18: Status 404 returned error can't find the container with id 965f88e4855c1476041ed5324fa20a2d4b4c1f56e12b28a22165001d9b3bae18
Mar 12 08:26:21 crc kubenswrapper[4809]: I0312 08:26:21.137338 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1483a15a-a49d-46c7-886b-85e8f7fcfc0b" path="/var/lib/kubelet/pods/1483a15a-a49d-46c7-886b-85e8f7fcfc0b/volumes"
Mar 12 08:26:21 crc kubenswrapper[4809]: I0312 08:26:21.138258 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1185a8-a7d9-4f17-b98f-02b8051d196a" path="/var/lib/kubelet/pods/3b1185a8-a7d9-4f17-b98f-02b8051d196a/volumes"
Mar 12 08:26:21 crc kubenswrapper[4809]: I0312 08:26:21.139052 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 08:26:21 crc kubenswrapper[4809]: I0312 08:26:21.220545 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 08:26:21 crc kubenswrapper[4809]: W0312 08:26:21.226247 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eb1b537_df6b_4e03_aa39_42380aac7bd8.slice/crio-ebd1abe666cf0abbbea2327ecb2e8395e9fef73f192892434b0657123bffe0ef WatchSource:0}: Error finding container ebd1abe666cf0abbbea2327ecb2e8395e9fef73f192892434b0657123bffe0ef: Status 404 returned error can't find the container with id ebd1abe666cf0abbbea2327ecb2e8395e9fef73f192892434b0657123bffe0ef
Mar 12 08:26:21 crc kubenswrapper[4809]: I0312 08:26:21.495562 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 12 08:26:21 crc kubenswrapper[4809]: I0312 08:26:21.497249 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 12 08:26:21 crc kubenswrapper[4809]: I0312 08:26:21.498672 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 12 08:26:21 crc kubenswrapper[4809]: I0312 08:26:21.505009 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.003725 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eb1b537-df6b-4e03-aa39-42380aac7bd8","Type":"ContainerStarted","Data":"8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539"}
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.003798 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eb1b537-df6b-4e03-aa39-42380aac7bd8","Type":"ContainerStarted","Data":"7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6"}
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.003819 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eb1b537-df6b-4e03-aa39-42380aac7bd8","Type":"ContainerStarted","Data":"ebd1abe666cf0abbbea2327ecb2e8395e9fef73f192892434b0657123bffe0ef"}
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.010902 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerDied","Data":"ae4e3291e11f40ff9801ba2f53a90feaa68a42cfcf2ed5288c6d532f211b51d2"}
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.010841 4809 generic.go:334] "Generic (PLEG): container finished" podID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerID="ae4e3291e11f40ff9801ba2f53a90feaa68a42cfcf2ed5288c6d532f211b51d2" exitCode=0
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.016930 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b77d704e-5a2f-48ba-ac3c-c8495bda44ff","Type":"ContainerStarted","Data":"957e3cae55cb8ba32360e1db30ee8cc872abf354f90821da80ef5a5f1383cbea"}
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.016977 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.016992 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b77d704e-5a2f-48ba-ac3c-c8495bda44ff","Type":"ContainerStarted","Data":"965f88e4855c1476041ed5324fa20a2d4b4c1f56e12b28a22165001d9b3bae18"}
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.020773 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.043088 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.043055744 podStartE2EDuration="2.043055744s" podCreationTimestamp="2026-03-12 08:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:22.025755333 +0000 UTC m=+1655.607791066" watchObservedRunningTime="2026-03-12 08:26:22.043055744 +0000 UTC m=+1655.625091487"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.051226 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.051203236 podStartE2EDuration="2.051203236s" podCreationTimestamp="2026-03-12 08:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:22.046662242 +0000 UTC m=+1655.628697965" watchObservedRunningTime="2026-03-12 08:26:22.051203236 +0000 UTC m=+1655.633238969"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.299597 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"]
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.302039 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.343781 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"]
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.460787 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.462249 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.462313 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.462636 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrl2\" (UniqueName: \"kubernetes.io/projected/678f1c67-671e-47c2-9086-165664e890c8-kube-api-access-hdrl2\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.462863 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.463199 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.566276 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrl2\" (UniqueName: \"kubernetes.io/projected/678f1c67-671e-47c2-9086-165664e890c8-kube-api-access-hdrl2\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.567221 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.567429 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.567556 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.567709 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.567816 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.568470 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.568509 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.568509 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.568741 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.568935 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.595620 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdrl2\" (UniqueName: \"kubernetes.io/projected/678f1c67-671e-47c2-9086-165664e890c8-kube-api-access-hdrl2\") pod \"dnsmasq-dns-6b7bbf7cf9-48fpg\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:22 crc kubenswrapper[4809]: I0312 08:26:22.643862 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:23 crc kubenswrapper[4809]: I0312 08:26:23.279101 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"]
Mar 12 08:26:24 crc kubenswrapper[4809]: I0312 08:26:24.041354 4809 generic.go:334] "Generic (PLEG): container finished" podID="678f1c67-671e-47c2-9086-165664e890c8" containerID="7ce3df5ec7639c225cedddcda84daf42856f9356dde19e7108d7b6ee7a257b70" exitCode=0
Mar 12 08:26:24 crc kubenswrapper[4809]: I0312 08:26:24.041504 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" event={"ID":"678f1c67-671e-47c2-9086-165664e890c8","Type":"ContainerDied","Data":"7ce3df5ec7639c225cedddcda84daf42856f9356dde19e7108d7b6ee7a257b70"}
Mar 12 08:26:24 crc kubenswrapper[4809]: I0312 08:26:24.042738 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" event={"ID":"678f1c67-671e-47c2-9086-165664e890c8","Type":"ContainerStarted","Data":"043581c2da4e0b6dabdf09bdb2348efb4066987530cfd7de6317ddf8d01985e4"}
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.086433 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" event={"ID":"678f1c67-671e-47c2-9086-165664e890c8","Type":"ContainerStarted","Data":"3a438231b1583d34a00c11e0ebfe0b6d225dd328ae7ec0d849fe8cc27f1051d3"}
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.088804 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.103483 4809 scope.go:117] "RemoveContainer" containerID="1a51f307df6fe5bf4978ea8e8793cd993e3bfd4fb5fd85053140da22e30402ce"
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.145521 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.146390 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-log" containerID="cri-o://6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d" gracePeriod=30
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.146556 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-api" containerID="cri-o://7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75" gracePeriod=30
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.150170 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" podStartSLOduration=3.150143942 podStartE2EDuration="3.150143942s" podCreationTimestamp="2026-03-12 08:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:25.116558508 +0000 UTC m=+1658.698594241" watchObservedRunningTime="2026-03-12 08:26:25.150143942 +0000 UTC m=+1658.732179675"
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.544686 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.544745 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 08:26:25 crc kubenswrapper[4809]: I0312 08:26:25.568414 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 08:26:26 crc kubenswrapper[4809]: I0312 08:26:26.104228 4809 generic.go:334] "Generic (PLEG): container finished" podID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerID="6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d" exitCode=143
Mar 12 08:26:26 crc kubenswrapper[4809]: I0312 08:26:26.104322 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b07ea172-0fa3-4ffe-a10a-fcf182734dc1","Type":"ContainerDied","Data":"6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d"}
Mar 12 08:26:28 crc kubenswrapper[4809]: I0312 08:26:28.909317 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.065734 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-config-data\") pod \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") "
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.066328 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vppb\" (UniqueName: \"kubernetes.io/projected/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-kube-api-access-8vppb\") pod \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") "
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.066502 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-combined-ca-bundle\") pod \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") "
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.066632 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-logs\") pod \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\" (UID: \"b07ea172-0fa3-4ffe-a10a-fcf182734dc1\") "
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.068826 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-logs" (OuterVolumeSpecName: "logs") pod "b07ea172-0fa3-4ffe-a10a-fcf182734dc1" (UID: "b07ea172-0fa3-4ffe-a10a-fcf182734dc1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.099506 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-kube-api-access-8vppb" (OuterVolumeSpecName: "kube-api-access-8vppb") pod "b07ea172-0fa3-4ffe-a10a-fcf182734dc1" (UID: "b07ea172-0fa3-4ffe-a10a-fcf182734dc1"). InnerVolumeSpecName "kube-api-access-8vppb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.121236 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-config-data" (OuterVolumeSpecName: "config-data") pod "b07ea172-0fa3-4ffe-a10a-fcf182734dc1" (UID: "b07ea172-0fa3-4ffe-a10a-fcf182734dc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.126462 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b07ea172-0fa3-4ffe-a10a-fcf182734dc1" (UID: "b07ea172-0fa3-4ffe-a10a-fcf182734dc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.153905 4809 generic.go:334] "Generic (PLEG): container finished" podID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerID="7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75" exitCode=0
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.153980 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b07ea172-0fa3-4ffe-a10a-fcf182734dc1","Type":"ContainerDied","Data":"7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75"}
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.154016 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b07ea172-0fa3-4ffe-a10a-fcf182734dc1","Type":"ContainerDied","Data":"de1e1b15ccfccb34deb99a6bda2316e6e84c177a809bda13abcf308972e9bbf2"}
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.154052 4809 scope.go:117] "RemoveContainer" containerID="7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.154387 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.171101 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vppb\" (UniqueName: \"kubernetes.io/projected/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-kube-api-access-8vppb\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.171179 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.171193 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-logs\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.171205 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07ea172-0fa3-4ffe-a10a-fcf182734dc1-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.267653 4809 scope.go:117] "RemoveContainer" containerID="6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.286901 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.293872 4809 scope.go:117] "RemoveContainer" containerID="7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75"
Mar 12 08:26:29 crc kubenswrapper[4809]: E0312 08:26:29.294441 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75\": container with ID starting with 7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75 not found: ID does not exist" containerID="7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.294492 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75"} err="failed to get container status \"7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75\": rpc error: code = NotFound desc = could not find container \"7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75\": container with ID starting with 7bd67350b405465acd4d877942b0759278b53ffe2e0f6fe697cb2000c46bab75 not found: ID does not exist"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.294522 4809 scope.go:117] "RemoveContainer" containerID="6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d"
Mar 12 08:26:29 crc kubenswrapper[4809]: E0312 08:26:29.294827 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d\": container with ID starting with 6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d not found: ID does not exist" containerID="6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.294858 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d"} err="failed to get container status \"6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d\": rpc error: code = NotFound desc = could not find container \"6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d\": container with ID starting with 6633b218c460ac381e8ab8eb9185ca003266f27db611b67a083dd12f81730b7d not found: ID does not exist"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.305759 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.326050 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 12 08:26:29 crc kubenswrapper[4809]: E0312 08:26:29.326935 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-log"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.326969 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-log"
Mar 12 08:26:29 crc kubenswrapper[4809]: E0312 08:26:29.327017 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-api"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.327027 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-api"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.327468 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-api"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.327555 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" containerName="nova-api-log"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.330667 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.342849 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.343060 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.343505 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.368281 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.479274 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.479366 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgbnc\" (UniqueName: \"kubernetes.io/projected/d8aa93b9-e740-431c-b4c2-77d3973d9251-kube-api-access-mgbnc\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.479900 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-config-data\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.480198 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.480425 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.480591 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8aa93b9-e740-431c-b4c2-77d3973d9251-logs\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.585046 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-config-data\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.585255 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0"
Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.585298 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") "
pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.585334 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8aa93b9-e740-431c-b4c2-77d3973d9251-logs\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.585365 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.585409 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgbnc\" (UniqueName: \"kubernetes.io/projected/d8aa93b9-e740-431c-b4c2-77d3973d9251-kube-api-access-mgbnc\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.586153 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8aa93b9-e740-431c-b4c2-77d3973d9251-logs\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.589706 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-config-data\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.591135 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.592791 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.593495 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.605510 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgbnc\" (UniqueName: \"kubernetes.io/projected/d8aa93b9-e740-431c-b4c2-77d3973d9251-kube-api-access-mgbnc\") pod \"nova-api-0\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " pod="openstack/nova-api-0" Mar 12 08:26:29 crc kubenswrapper[4809]: I0312 08:26:29.654962 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:26:30 crc kubenswrapper[4809]: I0312 08:26:30.106729 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:26:30 crc kubenswrapper[4809]: E0312 08:26:30.107623 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:26:30 crc kubenswrapper[4809]: I0312 08:26:30.191295 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:30 crc kubenswrapper[4809]: I0312 08:26:30.544597 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 08:26:30 crc kubenswrapper[4809]: I0312 08:26:30.544966 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 08:26:30 crc kubenswrapper[4809]: I0312 08:26:30.568447 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:26:30 crc kubenswrapper[4809]: I0312 08:26:30.596162 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.120854 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07ea172-0fa3-4ffe-a10a-fcf182734dc1" path="/var/lib/kubelet/pods/b07ea172-0fa3-4ffe-a10a-fcf182734dc1/volumes" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.215796 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d8aa93b9-e740-431c-b4c2-77d3973d9251","Type":"ContainerStarted","Data":"778210d7bee4521eefed935f786032eeadb5eaa91c6aaa08f29d1f0733bd4e8c"} Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.215846 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8aa93b9-e740-431c-b4c2-77d3973d9251","Type":"ContainerStarted","Data":"bc0ab4d7344b148c072b826b583cf9737c2d8d5957b4d242227b170c4d9a76dc"} Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.215861 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8aa93b9-e740-431c-b4c2-77d3973d9251","Type":"ContainerStarted","Data":"6bd86bbc0ae66ee959a846a721ca8262a2b85a473849506af062dcea61470607"} Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.252177 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.252146706 podStartE2EDuration="2.252146706s" podCreationTimestamp="2026-03-12 08:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:31.240769337 +0000 UTC m=+1664.822805090" watchObservedRunningTime="2026-03-12 08:26:31.252146706 +0000 UTC m=+1664.834182449" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.260301 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.429443 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-fblks"] Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.431758 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.434351 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.434564 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.450872 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fblks"] Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.551554 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-config-data\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.551665 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjxqc\" (UniqueName: \"kubernetes.io/projected/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-kube-api-access-bjxqc\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.551701 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.551789 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-scripts\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.566446 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.566466 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.656221 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-config-data\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.656398 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjxqc\" (UniqueName: \"kubernetes.io/projected/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-kube-api-access-bjxqc\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.656446 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.656580 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-scripts\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.664430 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-scripts\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.665268 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-config-data\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.665355 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.680010 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjxqc\" (UniqueName: \"kubernetes.io/projected/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-kube-api-access-bjxqc\") pod 
\"nova-cell1-cell-mapping-fblks\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:31 crc kubenswrapper[4809]: I0312 08:26:31.763756 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:32 crc kubenswrapper[4809]: I0312 08:26:32.325098 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fblks"] Mar 12 08:26:32 crc kubenswrapper[4809]: I0312 08:26:32.645359 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" Mar 12 08:26:32 crc kubenswrapper[4809]: I0312 08:26:32.769504 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-jhzbr"] Mar 12 08:26:32 crc kubenswrapper[4809]: I0312 08:26:32.769787 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" podUID="92857c04-f2b0-41d7-b825-591496c43e0c" containerName="dnsmasq-dns" containerID="cri-o://89f840827fdf7cd1b10ca2e038fc3c8c9f5a09ec8a3e5f77fc61386bc2ca2ad9" gracePeriod=10 Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.257860 4809 generic.go:334] "Generic (PLEG): container finished" podID="92857c04-f2b0-41d7-b825-591496c43e0c" containerID="89f840827fdf7cd1b10ca2e038fc3c8c9f5a09ec8a3e5f77fc61386bc2ca2ad9" exitCode=0 Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.257922 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" event={"ID":"92857c04-f2b0-41d7-b825-591496c43e0c","Type":"ContainerDied","Data":"89f840827fdf7cd1b10ca2e038fc3c8c9f5a09ec8a3e5f77fc61386bc2ca2ad9"} Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.262896 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fblks" 
event={"ID":"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70","Type":"ContainerStarted","Data":"6e5cc0d743f0aac26a1e000c34f95ad967b50780bc885fdcc6981791b211f92e"} Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.262938 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fblks" event={"ID":"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70","Type":"ContainerStarted","Data":"c3e6df62a7a51e6b2a0b178dda770fa25651c2df828cbee0c984711599ff0e1f"} Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.278159 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-fblks" podStartSLOduration=2.27813576 podStartE2EDuration="2.27813576s" podCreationTimestamp="2026-03-12 08:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:33.276394383 +0000 UTC m=+1666.858430116" watchObservedRunningTime="2026-03-12 08:26:33.27813576 +0000 UTC m=+1666.860171493" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.498382 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.682727 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-config\") pod \"92857c04-f2b0-41d7-b825-591496c43e0c\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.683363 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-sb\") pod \"92857c04-f2b0-41d7-b825-591496c43e0c\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.683413 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npnsz\" (UniqueName: \"kubernetes.io/projected/92857c04-f2b0-41d7-b825-591496c43e0c-kube-api-access-npnsz\") pod \"92857c04-f2b0-41d7-b825-591496c43e0c\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.683461 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-nb\") pod \"92857c04-f2b0-41d7-b825-591496c43e0c\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.683571 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-svc\") pod \"92857c04-f2b0-41d7-b825-591496c43e0c\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.683747 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-swift-storage-0\") pod \"92857c04-f2b0-41d7-b825-591496c43e0c\" (UID: \"92857c04-f2b0-41d7-b825-591496c43e0c\") " Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.690687 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92857c04-f2b0-41d7-b825-591496c43e0c-kube-api-access-npnsz" (OuterVolumeSpecName: "kube-api-access-npnsz") pod "92857c04-f2b0-41d7-b825-591496c43e0c" (UID: "92857c04-f2b0-41d7-b825-591496c43e0c"). InnerVolumeSpecName "kube-api-access-npnsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.751602 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92857c04-f2b0-41d7-b825-591496c43e0c" (UID: "92857c04-f2b0-41d7-b825-591496c43e0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.752506 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-config" (OuterVolumeSpecName: "config") pod "92857c04-f2b0-41d7-b825-591496c43e0c" (UID: "92857c04-f2b0-41d7-b825-591496c43e0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.755413 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92857c04-f2b0-41d7-b825-591496c43e0c" (UID: "92857c04-f2b0-41d7-b825-591496c43e0c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.771185 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92857c04-f2b0-41d7-b825-591496c43e0c" (UID: "92857c04-f2b0-41d7-b825-591496c43e0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.782798 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "92857c04-f2b0-41d7-b825-591496c43e0c" (UID: "92857c04-f2b0-41d7-b825-591496c43e0c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.787912 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.788354 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.788498 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npnsz\" (UniqueName: \"kubernetes.io/projected/92857c04-f2b0-41d7-b825-591496c43e0c-kube-api-access-npnsz\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.788587 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-ovsdbserver-nb\") on node 
\"crc\" DevicePath \"\"" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.788680 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:33 crc kubenswrapper[4809]: I0312 08:26:33.788768 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/92857c04-f2b0-41d7-b825-591496c43e0c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:35 crc kubenswrapper[4809]: I0312 08:26:35.015354 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" event={"ID":"92857c04-f2b0-41d7-b825-591496c43e0c","Type":"ContainerDied","Data":"8f8f338f2ace511693611de23fb4ec2761649c17318bb58f657b81bec5b4979a"} Mar 12 08:26:35 crc kubenswrapper[4809]: I0312 08:26:35.015440 4809 scope.go:117] "RemoveContainer" containerID="89f840827fdf7cd1b10ca2e038fc3c8c9f5a09ec8a3e5f77fc61386bc2ca2ad9" Mar 12 08:26:35 crc kubenswrapper[4809]: I0312 08:26:35.016490 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-jhzbr" Mar 12 08:26:35 crc kubenswrapper[4809]: I0312 08:26:35.092819 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-jhzbr"] Mar 12 08:26:35 crc kubenswrapper[4809]: I0312 08:26:35.098600 4809 scope.go:117] "RemoveContainer" containerID="ea06fe06a2ac0feda255000f0c5e14cdb76cafba523247bd8556d90d2b4adacf" Mar 12 08:26:35 crc kubenswrapper[4809]: I0312 08:26:35.104655 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-jhzbr"] Mar 12 08:26:35 crc kubenswrapper[4809]: I0312 08:26:35.123722 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92857c04-f2b0-41d7-b825-591496c43e0c" path="/var/lib/kubelet/pods/92857c04-f2b0-41d7-b825-591496c43e0c/volumes" Mar 12 08:26:39 crc kubenswrapper[4809]: I0312 08:26:39.076951 4809 generic.go:334] "Generic (PLEG): container finished" podID="d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" containerID="6e5cc0d743f0aac26a1e000c34f95ad967b50780bc885fdcc6981791b211f92e" exitCode=0 Mar 12 08:26:39 crc kubenswrapper[4809]: I0312 08:26:39.077016 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fblks" event={"ID":"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70","Type":"ContainerDied","Data":"6e5cc0d743f0aac26a1e000c34f95ad967b50780bc885fdcc6981791b211f92e"} Mar 12 08:26:39 crc kubenswrapper[4809]: I0312 08:26:39.655172 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 08:26:39 crc kubenswrapper[4809]: I0312 08:26:39.655585 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.550548 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.551230 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.567598 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.570037 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.581261 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.670338 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.14:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.670400 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.14:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.746405 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-combined-ca-bundle\") pod \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.746453 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-scripts\") pod \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\" (UID: 
\"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.746474 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-config-data\") pod \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.746509 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjxqc\" (UniqueName: \"kubernetes.io/projected/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-kube-api-access-bjxqc\") pod \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\" (UID: \"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70\") " Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.754173 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-scripts" (OuterVolumeSpecName: "scripts") pod "d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" (UID: "d78e41a4-1f33-45bf-a0b8-6e53b47a3f70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.754223 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-kube-api-access-bjxqc" (OuterVolumeSpecName: "kube-api-access-bjxqc") pod "d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" (UID: "d78e41a4-1f33-45bf-a0b8-6e53b47a3f70"). InnerVolumeSpecName "kube-api-access-bjxqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.789840 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-config-data" (OuterVolumeSpecName: "config-data") pod "d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" (UID: "d78e41a4-1f33-45bf-a0b8-6e53b47a3f70"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.789877 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" (UID: "d78e41a4-1f33-45bf-a0b8-6e53b47a3f70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.851092 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.851172 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.851186 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:40 crc kubenswrapper[4809]: I0312 08:26:40.851200 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjxqc\" (UniqueName: \"kubernetes.io/projected/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70-kube-api-access-bjxqc\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 08:26:41.123108 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fblks" Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 08:26:41.166869 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fblks" event={"ID":"d78e41a4-1f33-45bf-a0b8-6e53b47a3f70","Type":"ContainerDied","Data":"c3e6df62a7a51e6b2a0b178dda770fa25651c2df828cbee0c984711599ff0e1f"} Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 08:26:41.166920 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3e6df62a7a51e6b2a0b178dda770fa25651c2df828cbee0c984711599ff0e1f" Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 08:26:41.245553 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 08:26:41.245943 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-log" containerID="cri-o://bc0ab4d7344b148c072b826b583cf9737c2d8d5957b4d242227b170c4d9a76dc" gracePeriod=30 Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 08:26:41.246023 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-api" containerID="cri-o://778210d7bee4521eefed935f786032eeadb5eaa91c6aaa08f29d1f0733bd4e8c" gracePeriod=30 Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 08:26:41.267877 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 08:26:41.268147 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="586c76fb-3a10-473c-b9d9-14fb71cc4f6e" containerName="nova-scheduler-scheduler" containerID="cri-o://d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2" gracePeriod=30 Mar 12 08:26:41 crc kubenswrapper[4809]: I0312 
08:26:41.333568 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:42 crc kubenswrapper[4809]: I0312 08:26:42.146566 4809 generic.go:334] "Generic (PLEG): container finished" podID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerID="bc0ab4d7344b148c072b826b583cf9737c2d8d5957b4d242227b170c4d9a76dc" exitCode=143 Mar 12 08:26:42 crc kubenswrapper[4809]: I0312 08:26:42.146631 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8aa93b9-e740-431c-b4c2-77d3973d9251","Type":"ContainerDied","Data":"bc0ab4d7344b148c072b826b583cf9737c2d8d5957b4d242227b170c4d9a76dc"} Mar 12 08:26:42 crc kubenswrapper[4809]: I0312 08:26:42.202487 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 08:26:43 crc kubenswrapper[4809]: I0312 08:26:43.159496 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-log" containerID="cri-o://7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6" gracePeriod=30 Mar 12 08:26:43 crc kubenswrapper[4809]: I0312 08:26:43.159609 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-metadata" containerID="cri-o://8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539" gracePeriod=30 Mar 12 08:26:44 crc kubenswrapper[4809]: I0312 08:26:44.107848 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:26:44 crc kubenswrapper[4809]: E0312 08:26:44.108547 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:26:44 crc kubenswrapper[4809]: I0312 08:26:44.179003 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerID="7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6" exitCode=143 Mar 12 08:26:44 crc kubenswrapper[4809]: I0312 08:26:44.179051 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eb1b537-df6b-4e03-aa39-42380aac7bd8","Type":"ContainerDied","Data":"7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6"} Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.204629 4809 generic.go:334] "Generic (PLEG): container finished" podID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerID="8d09117fdc064e2b240b8d3ee0392adb07387ebea513b338751cf143351ce9df" exitCode=137 Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.204696 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerDied","Data":"8d09117fdc064e2b240b8d3ee0392adb07387ebea513b338751cf143351ce9df"} Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.422332 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.498175 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-combined-ca-bundle\") pod \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.498278 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-scripts\") pod \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.498342 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-config-data\") pod \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.498630 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg8t4\" (UniqueName: \"kubernetes.io/projected/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-kube-api-access-mg8t4\") pod \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\" (UID: \"f5e68398-e067-4b2a-9bbf-7d1e79911ccc\") " Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.507259 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-scripts" (OuterVolumeSpecName: "scripts") pod "f5e68398-e067-4b2a-9bbf-7d1e79911ccc" (UID: "f5e68398-e067-4b2a-9bbf-7d1e79911ccc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.512452 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-kube-api-access-mg8t4" (OuterVolumeSpecName: "kube-api-access-mg8t4") pod "f5e68398-e067-4b2a-9bbf-7d1e79911ccc" (UID: "f5e68398-e067-4b2a-9bbf-7d1e79911ccc"). InnerVolumeSpecName "kube-api-access-mg8t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.604035 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg8t4\" (UniqueName: \"kubernetes.io/projected/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-kube-api-access-mg8t4\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.604063 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.657293 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-config-data" (OuterVolumeSpecName: "config-data") pod "f5e68398-e067-4b2a-9bbf-7d1e79911ccc" (UID: "f5e68398-e067-4b2a-9bbf-7d1e79911ccc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.688893 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5e68398-e067-4b2a-9bbf-7d1e79911ccc" (UID: "f5e68398-e067-4b2a-9bbf-7d1e79911ccc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.706740 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:45 crc kubenswrapper[4809]: I0312 08:26:45.706773 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e68398-e067-4b2a-9bbf-7d1e79911ccc-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:45 crc kubenswrapper[4809]: E0312 08:26:45.965285 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2 is running failed: container process not found" containerID="d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 08:26:45 crc kubenswrapper[4809]: E0312 08:26:45.965711 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2 is running failed: container process not found" containerID="d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 08:26:45 crc kubenswrapper[4809]: E0312 08:26:45.966017 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2 is running failed: container process not found" containerID="d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 08:26:45 crc 
kubenswrapper[4809]: E0312 08:26:45.966054 4809 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="586c76fb-3a10-473c-b9d9-14fb71cc4f6e" containerName="nova-scheduler-scheduler" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.217980 4809 generic.go:334] "Generic (PLEG): container finished" podID="586c76fb-3a10-473c-b9d9-14fb71cc4f6e" containerID="d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2" exitCode=0 Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.218060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"586c76fb-3a10-473c-b9d9-14fb71cc4f6e","Type":"ContainerDied","Data":"d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2"} Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.218092 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"586c76fb-3a10-473c-b9d9-14fb71cc4f6e","Type":"ContainerDied","Data":"c50a7a5dac1545e19ad3551f5d386aa55f1934534f0b58f961c2a7c1e8945606"} Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.218105 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c50a7a5dac1545e19ad3551f5d386aa55f1934534f0b58f961c2a7c1e8945606" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.222464 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f5e68398-e067-4b2a-9bbf-7d1e79911ccc","Type":"ContainerDied","Data":"11f5170ae121af9e82b11c8a27e09731fc6ca5a2838ce28aa609a4ac414768b0"} Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.222538 4809 scope.go:117] "RemoveContainer" containerID="8d09117fdc064e2b240b8d3ee0392adb07387ebea513b338751cf143351ce9df" Mar 12 08:26:46 crc 
kubenswrapper[4809]: I0312 08:26:46.223000 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.237441 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.250065 4809 scope.go:117] "RemoveContainer" containerID="20bc9edb132fafdc73409542f43cc38ab68d8317420b4e7acf5927cc5e1bc5c5" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.302666 4809 scope.go:117] "RemoveContainer" containerID="4f7e12c7d8d92e9dbb842e71442901b29ea4d92a6db968954ebf7cf565bb1bbd" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.325388 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": read tcp 10.217.0.2:36786->10.217.1.11:8775: read: connection reset by peer" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.325739 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": read tcp 10.217.0.2:36784->10.217.1.11:8775: read: connection reset by peer" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.330523 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.348420 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.358764 4809 scope.go:117] "RemoveContainer" containerID="6b275c82322f9321253aaa496bb5f1b14e950eb39dfc5d82f4514aeeadd004d3" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.363941 4809 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/aodh-0"] Mar 12 08:26:46 crc kubenswrapper[4809]: E0312 08:26:46.364912 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-api" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.364932 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-api" Mar 12 08:26:46 crc kubenswrapper[4809]: E0312 08:26:46.364953 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586c76fb-3a10-473c-b9d9-14fb71cc4f6e" containerName="nova-scheduler-scheduler" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.364964 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="586c76fb-3a10-473c-b9d9-14fb71cc4f6e" containerName="nova-scheduler-scheduler" Mar 12 08:26:46 crc kubenswrapper[4809]: E0312 08:26:46.364984 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-listener" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.364990 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-listener" Mar 12 08:26:46 crc kubenswrapper[4809]: E0312 08:26:46.365005 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92857c04-f2b0-41d7-b825-591496c43e0c" containerName="dnsmasq-dns" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365011 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="92857c04-f2b0-41d7-b825-591496c43e0c" containerName="dnsmasq-dns" Mar 12 08:26:46 crc kubenswrapper[4809]: E0312 08:26:46.365030 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-notifier" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365036 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" 
containerName="aodh-notifier" Mar 12 08:26:46 crc kubenswrapper[4809]: E0312 08:26:46.365063 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92857c04-f2b0-41d7-b825-591496c43e0c" containerName="init" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365070 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="92857c04-f2b0-41d7-b825-591496c43e0c" containerName="init" Mar 12 08:26:46 crc kubenswrapper[4809]: E0312 08:26:46.365080 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" containerName="nova-manage" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365086 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" containerName="nova-manage" Mar 12 08:26:46 crc kubenswrapper[4809]: E0312 08:26:46.365097 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-evaluator" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365106 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-evaluator" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365437 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" containerName="nova-manage" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365452 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="586c76fb-3a10-473c-b9d9-14fb71cc4f6e" containerName="nova-scheduler-scheduler" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365469 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-evaluator" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365485 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" 
containerName="aodh-listener" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365498 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-notifier" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365511 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" containerName="aodh-api" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.365524 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="92857c04-f2b0-41d7-b825-591496c43e0c" containerName="dnsmasq-dns" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.368531 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.373037 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.373372 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.373685 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-qhrlw" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.373847 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.373990 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.376822 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.429947 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-combined-ca-bundle\") pod \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.430210 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq7cc\" (UniqueName: \"kubernetes.io/projected/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-kube-api-access-bq7cc\") pod \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.430952 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-config-data\") pod \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\" (UID: \"586c76fb-3a10-473c-b9d9-14fb71cc4f6e\") " Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.431151 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-internal-tls-certs\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.431220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-public-tls-certs\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.431255 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-config-data\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc 
kubenswrapper[4809]: I0312 08:26:46.431312 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wbl\" (UniqueName: \"kubernetes.io/projected/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-kube-api-access-94wbl\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.431453 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-scripts\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.431496 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.440462 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-kube-api-access-bq7cc" (OuterVolumeSpecName: "kube-api-access-bq7cc") pod "586c76fb-3a10-473c-b9d9-14fb71cc4f6e" (UID: "586c76fb-3a10-473c-b9d9-14fb71cc4f6e"). InnerVolumeSpecName "kube-api-access-bq7cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.465493 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "586c76fb-3a10-473c-b9d9-14fb71cc4f6e" (UID: "586c76fb-3a10-473c-b9d9-14fb71cc4f6e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.472371 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-config-data" (OuterVolumeSpecName: "config-data") pod "586c76fb-3a10-473c-b9d9-14fb71cc4f6e" (UID: "586c76fb-3a10-473c-b9d9-14fb71cc4f6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.542858 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-public-tls-certs\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.542930 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-config-data\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.543174 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94wbl\" (UniqueName: \"kubernetes.io/projected/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-kube-api-access-94wbl\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.543567 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-scripts\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.543613 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.543875 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-internal-tls-certs\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.545105 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.545177 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.545193 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq7cc\" (UniqueName: \"kubernetes.io/projected/586c76fb-3a10-473c-b9d9-14fb71cc4f6e-kube-api-access-bq7cc\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.547674 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-config-data\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.550650 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-public-tls-certs\") pod \"aodh-0\" (UID: 
\"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.557855 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.559966 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-scripts\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.560106 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-internal-tls-certs\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.567385 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94wbl\" (UniqueName: \"kubernetes.io/projected/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-kube-api-access-94wbl\") pod \"aodh-0\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.740586 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.843494 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.853095 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-combined-ca-bundle\") pod \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.853158 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb1b537-df6b-4e03-aa39-42380aac7bd8-logs\") pod \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.853228 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-nova-metadata-tls-certs\") pod \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.853497 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfrm5\" (UniqueName: \"kubernetes.io/projected/4eb1b537-df6b-4e03-aa39-42380aac7bd8-kube-api-access-pfrm5\") pod \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.853641 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-config-data\") pod \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.853809 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4eb1b537-df6b-4e03-aa39-42380aac7bd8-logs" (OuterVolumeSpecName: "logs") pod "4eb1b537-df6b-4e03-aa39-42380aac7bd8" (UID: "4eb1b537-df6b-4e03-aa39-42380aac7bd8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.854515 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb1b537-df6b-4e03-aa39-42380aac7bd8-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.860512 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb1b537-df6b-4e03-aa39-42380aac7bd8-kube-api-access-pfrm5" (OuterVolumeSpecName: "kube-api-access-pfrm5") pod "4eb1b537-df6b-4e03-aa39-42380aac7bd8" (UID: "4eb1b537-df6b-4e03-aa39-42380aac7bd8"). InnerVolumeSpecName "kube-api-access-pfrm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.886852 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4eb1b537-df6b-4e03-aa39-42380aac7bd8" (UID: "4eb1b537-df6b-4e03-aa39-42380aac7bd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.888853 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-config-data" (OuterVolumeSpecName: "config-data") pod "4eb1b537-df6b-4e03-aa39-42380aac7bd8" (UID: "4eb1b537-df6b-4e03-aa39-42380aac7bd8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.954677 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4eb1b537-df6b-4e03-aa39-42380aac7bd8" (UID: "4eb1b537-df6b-4e03-aa39-42380aac7bd8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.955778 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-nova-metadata-tls-certs\") pod \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\" (UID: \"4eb1b537-df6b-4e03-aa39-42380aac7bd8\") " Mar 12 08:26:46 crc kubenswrapper[4809]: W0312 08:26:46.955937 4809 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4eb1b537-df6b-4e03-aa39-42380aac7bd8/volumes/kubernetes.io~secret/nova-metadata-tls-certs Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.955951 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4eb1b537-df6b-4e03-aa39-42380aac7bd8" (UID: "4eb1b537-df6b-4e03-aa39-42380aac7bd8"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.956731 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfrm5\" (UniqueName: \"kubernetes.io/projected/4eb1b537-df6b-4e03-aa39-42380aac7bd8-kube-api-access-pfrm5\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.956753 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.956764 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:46 crc kubenswrapper[4809]: I0312 08:26:46.956774 4809 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4eb1b537-df6b-4e03-aa39-42380aac7bd8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.141280 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5e68398-e067-4b2a-9bbf-7d1e79911ccc" path="/var/lib/kubelet/pods/f5e68398-e067-4b2a-9bbf-7d1e79911ccc/volumes" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.251426 4809 generic.go:334] "Generic (PLEG): container finished" podID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerID="778210d7bee4521eefed935f786032eeadb5eaa91c6aaa08f29d1f0733bd4e8c" exitCode=0 Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.251608 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8aa93b9-e740-431c-b4c2-77d3973d9251","Type":"ContainerDied","Data":"778210d7bee4521eefed935f786032eeadb5eaa91c6aaa08f29d1f0733bd4e8c"} Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 
08:26:47.252583 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8aa93b9-e740-431c-b4c2-77d3973d9251","Type":"ContainerDied","Data":"6bd86bbc0ae66ee959a846a721ca8262a2b85a473849506af062dcea61470607"} Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.252609 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd86bbc0ae66ee959a846a721ca8262a2b85a473849506af062dcea61470607" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.257901 4809 generic.go:334] "Generic (PLEG): container finished" podID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerID="8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539" exitCode=0 Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.258027 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eb1b537-df6b-4e03-aa39-42380aac7bd8","Type":"ContainerDied","Data":"8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539"} Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.258063 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4eb1b537-df6b-4e03-aa39-42380aac7bd8","Type":"ContainerDied","Data":"ebd1abe666cf0abbbea2327ecb2e8395e9fef73f192892434b0657123bffe0ef"} Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.258097 4809 scope.go:117] "RemoveContainer" containerID="8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.258148 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.262247 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.298215 4809 scope.go:117] "RemoveContainer" containerID="7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.307067 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.326365 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.341783 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.363934 4809 scope.go:117] "RemoveContainer" containerID="8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539" Mar 12 08:26:47 crc kubenswrapper[4809]: E0312 08:26:47.365811 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539\": container with ID starting with 8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539 not found: ID does not exist" containerID="8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.365859 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539"} err="failed to get container status \"8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539\": rpc error: code = NotFound desc = could not find container \"8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539\": container with ID starting with 8d83dbb02d3e347c2ebebe3d87867970eb544c920b87745b17e40d71db57f539 not found: ID does not exist" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 
08:26:47.365885 4809 scope.go:117] "RemoveContainer" containerID="7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6" Mar 12 08:26:47 crc kubenswrapper[4809]: E0312 08:26:47.366158 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6\": container with ID starting with 7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6 not found: ID does not exist" containerID="7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.366178 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6"} err="failed to get container status \"7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6\": rpc error: code = NotFound desc = could not find container \"7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6\": container with ID starting with 7ac6254b133755930372b31485c6baf43e4a5b682323a3a9aa7aa077dc9372d6 not found: ID does not exist" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.370212 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: E0312 08:26:47.370790 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-log" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.370817 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-log" Mar 12 08:26:47 crc kubenswrapper[4809]: E0312 08:26:47.370833 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-api" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 
08:26:47.370840 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-api" Mar 12 08:26:47 crc kubenswrapper[4809]: E0312 08:26:47.370874 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-metadata" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.370881 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-metadata" Mar 12 08:26:47 crc kubenswrapper[4809]: E0312 08:26:47.370889 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-log" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.370895 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-log" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371181 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-metadata" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371202 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-api" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371216 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" containerName="nova-api-log" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371236 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" containerName="nova-metadata-log" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371520 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-internal-tls-certs\") pod \"d8aa93b9-e740-431c-b4c2-77d3973d9251\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371606 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-public-tls-certs\") pod \"d8aa93b9-e740-431c-b4c2-77d3973d9251\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371712 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8aa93b9-e740-431c-b4c2-77d3973d9251-logs\") pod \"d8aa93b9-e740-431c-b4c2-77d3973d9251\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371779 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-combined-ca-bundle\") pod \"d8aa93b9-e740-431c-b4c2-77d3973d9251\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371838 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-config-data\") pod \"d8aa93b9-e740-431c-b4c2-77d3973d9251\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.371988 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgbnc\" (UniqueName: \"kubernetes.io/projected/d8aa93b9-e740-431c-b4c2-77d3973d9251-kube-api-access-mgbnc\") pod \"d8aa93b9-e740-431c-b4c2-77d3973d9251\" (UID: \"d8aa93b9-e740-431c-b4c2-77d3973d9251\") " Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 
08:26:47.373759 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.376094 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8aa93b9-e740-431c-b4c2-77d3973d9251-logs" (OuterVolumeSpecName: "logs") pod "d8aa93b9-e740-431c-b4c2-77d3973d9251" (UID: "d8aa93b9-e740-431c-b4c2-77d3973d9251"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.381140 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.426784 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.426994 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8aa93b9-e740-431c-b4c2-77d3973d9251-kube-api-access-mgbnc" (OuterVolumeSpecName: "kube-api-access-mgbnc") pod "d8aa93b9-e740-431c-b4c2-77d3973d9251" (UID: "d8aa93b9-e740-431c-b4c2-77d3973d9251"). InnerVolumeSpecName "kube-api-access-mgbnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.444968 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8aa93b9-e740-431c-b4c2-77d3973d9251" (UID: "d8aa93b9-e740-431c-b4c2-77d3973d9251"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.471801 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.474335 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xlt\" (UniqueName: \"kubernetes.io/projected/271f5266-7da5-4a37-a61c-aa02f9e04d15-kube-api-access-29xlt\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.474489 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271f5266-7da5-4a37-a61c-aa02f9e04d15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.474646 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271f5266-7da5-4a37-a61c-aa02f9e04d15-config-data\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.474716 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgbnc\" (UniqueName: \"kubernetes.io/projected/d8aa93b9-e740-431c-b4c2-77d3973d9251-kube-api-access-mgbnc\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.474730 4809 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8aa93b9-e740-431c-b4c2-77d3973d9251-logs\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.474740 4809 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.482066 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-config-data" (OuterVolumeSpecName: "config-data") pod "d8aa93b9-e740-431c-b4c2-77d3973d9251" (UID: "d8aa93b9-e740-431c-b4c2-77d3973d9251"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.488860 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d8aa93b9-e740-431c-b4c2-77d3973d9251" (UID: "d8aa93b9-e740-431c-b4c2-77d3973d9251"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.491897 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d8aa93b9-e740-431c-b4c2-77d3973d9251" (UID: "d8aa93b9-e740-431c-b4c2-77d3973d9251"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.493796 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.516178 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.518731 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.521651 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.521699 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.537894 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.560755 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.577769 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.578085 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529zz\" (UniqueName: \"kubernetes.io/projected/9af05a91-f59f-4731-b75d-95fe7b869838-kube-api-access-529zz\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.578228 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271f5266-7da5-4a37-a61c-aa02f9e04d15-config-data\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.578304 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-config-data\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.578685 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xlt\" (UniqueName: \"kubernetes.io/projected/271f5266-7da5-4a37-a61c-aa02f9e04d15-kube-api-access-29xlt\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.578992 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af05a91-f59f-4731-b75d-95fe7b869838-logs\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.579147 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.579302 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271f5266-7da5-4a37-a61c-aa02f9e04d15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.579430 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:47 crc 
kubenswrapper[4809]: I0312 08:26:47.579454 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.579464 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8aa93b9-e740-431c-b4c2-77d3973d9251-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.582205 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271f5266-7da5-4a37-a61c-aa02f9e04d15-config-data\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.582301 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271f5266-7da5-4a37-a61c-aa02f9e04d15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.603140 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xlt\" (UniqueName: \"kubernetes.io/projected/271f5266-7da5-4a37-a61c-aa02f9e04d15-kube-api-access-29xlt\") pod \"nova-scheduler-0\" (UID: \"271f5266-7da5-4a37-a61c-aa02f9e04d15\") " pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.681346 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-529zz\" (UniqueName: \"kubernetes.io/projected/9af05a91-f59f-4731-b75d-95fe7b869838-kube-api-access-529zz\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc 
kubenswrapper[4809]: I0312 08:26:47.682008 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-config-data\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.682175 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af05a91-f59f-4731-b75d-95fe7b869838-logs\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.682270 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.682433 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.682875 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af05a91-f59f-4731-b75d-95fe7b869838-logs\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.686750 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.688647 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-config-data\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.690063 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af05a91-f59f-4731-b75d-95fe7b869838-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.703978 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-529zz\" (UniqueName: \"kubernetes.io/projected/9af05a91-f59f-4731-b75d-95fe7b869838-kube-api-access-529zz\") pod \"nova-metadata-0\" (UID: \"9af05a91-f59f-4731-b75d-95fe7b869838\") " pod="openstack/nova-metadata-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.768947 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 08:26:47 crc kubenswrapper[4809]: I0312 08:26:47.839743 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.275892 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerStarted","Data":"e2acaaaa3970eb8cdbef7136f857b82f8873818daff9cabe5a07b8c2ca6cd3ee"} Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.280855 4809 generic.go:334] "Generic (PLEG): container finished" podID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerID="c7e34f58e8f5fedcc550976b179a97f44622c003b60541d3d29865089b3bc0f0" exitCode=137 Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.280992 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerDied","Data":"c7e34f58e8f5fedcc550976b179a97f44622c003b60541d3d29865089b3bc0f0"} Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.281278 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.367389 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.407924 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.440316 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.454386 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.457781 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.461225 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.461441 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.461486 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.470838 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.529161 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bc6p\" (UniqueName: \"kubernetes.io/projected/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-kube-api-access-2bc6p\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.529245 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.529297 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.529356 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-config-data\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.529442 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.529538 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-logs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.577177 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.636567 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-scripts\") pod \"a5581de1-27a7-4f7f-8388-904d787d5ab8\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.636644 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-run-httpd\") pod \"a5581de1-27a7-4f7f-8388-904d787d5ab8\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.636706 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-config-data\") pod \"a5581de1-27a7-4f7f-8388-904d787d5ab8\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.636729 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-log-httpd\") pod \"a5581de1-27a7-4f7f-8388-904d787d5ab8\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.636830 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-sg-core-conf-yaml\") pod \"a5581de1-27a7-4f7f-8388-904d787d5ab8\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.637040 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9mws\" (UniqueName: 
\"kubernetes.io/projected/a5581de1-27a7-4f7f-8388-904d787d5ab8-kube-api-access-c9mws\") pod \"a5581de1-27a7-4f7f-8388-904d787d5ab8\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.637202 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-combined-ca-bundle\") pod \"a5581de1-27a7-4f7f-8388-904d787d5ab8\" (UID: \"a5581de1-27a7-4f7f-8388-904d787d5ab8\") " Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.638248 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bc6p\" (UniqueName: \"kubernetes.io/projected/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-kube-api-access-2bc6p\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.638295 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.638385 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.638439 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-config-data\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc 
kubenswrapper[4809]: I0312 08:26:48.638511 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.638583 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-logs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.639219 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-logs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.639838 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a5581de1-27a7-4f7f-8388-904d787d5ab8" (UID: "a5581de1-27a7-4f7f-8388-904d787d5ab8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.646990 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-scripts" (OuterVolumeSpecName: "scripts") pod "a5581de1-27a7-4f7f-8388-904d787d5ab8" (UID: "a5581de1-27a7-4f7f-8388-904d787d5ab8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.647659 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a5581de1-27a7-4f7f-8388-904d787d5ab8" (UID: "a5581de1-27a7-4f7f-8388-904d787d5ab8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.664297 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bc6p\" (UniqueName: \"kubernetes.io/projected/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-kube-api-access-2bc6p\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.677743 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.681745 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.683950 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.687001 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-config-data\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " 
pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.694865 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6fbec3-cafa-494f-a79b-9fcdb1665bb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6\") " pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.705418 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5581de1-27a7-4f7f-8388-904d787d5ab8-kube-api-access-c9mws" (OuterVolumeSpecName: "kube-api-access-c9mws") pod "a5581de1-27a7-4f7f-8388-904d787d5ab8" (UID: "a5581de1-27a7-4f7f-8388-904d787d5ab8"). InnerVolumeSpecName "kube-api-access-c9mws". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.741097 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.741163 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.741173 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5581de1-27a7-4f7f-8388-904d787d5ab8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.741181 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9mws\" (UniqueName: \"kubernetes.io/projected/a5581de1-27a7-4f7f-8388-904d787d5ab8-kube-api-access-c9mws\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.755558 4809 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a5581de1-27a7-4f7f-8388-904d787d5ab8" (UID: "a5581de1-27a7-4f7f-8388-904d787d5ab8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.785853 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.844082 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.862815 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5581de1-27a7-4f7f-8388-904d787d5ab8" (UID: "a5581de1-27a7-4f7f-8388-904d787d5ab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.923487 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-config-data" (OuterVolumeSpecName: "config-data") pod "a5581de1-27a7-4f7f-8388-904d787d5ab8" (UID: "a5581de1-27a7-4f7f-8388-904d787d5ab8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.949506 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:48 crc kubenswrapper[4809]: I0312 08:26:48.949547 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5581de1-27a7-4f7f-8388-904d787d5ab8-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.122708 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb1b537-df6b-4e03-aa39-42380aac7bd8" path="/var/lib/kubelet/pods/4eb1b537-df6b-4e03-aa39-42380aac7bd8/volumes" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.123615 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586c76fb-3a10-473c-b9d9-14fb71cc4f6e" path="/var/lib/kubelet/pods/586c76fb-3a10-473c-b9d9-14fb71cc4f6e/volumes" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.124372 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8aa93b9-e740-431c-b4c2-77d3973d9251" path="/var/lib/kubelet/pods/d8aa93b9-e740-431c-b4c2-77d3973d9251/volumes" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.298359 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5581de1-27a7-4f7f-8388-904d787d5ab8","Type":"ContainerDied","Data":"c0a53e776590ef0301ccac9e44bb490a3b17526ad85c3a60aaf46750e1a4de88"} Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.298656 4809 scope.go:117] "RemoveContainer" containerID="c7e34f58e8f5fedcc550976b179a97f44622c003b60541d3d29865089b3bc0f0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.298715 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.303704 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"271f5266-7da5-4a37-a61c-aa02f9e04d15","Type":"ContainerStarted","Data":"36b216c2e62760c6d6663b8269e334e6922c43e6a57c216a3f83c29215d97a04"} Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.303774 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"271f5266-7da5-4a37-a61c-aa02f9e04d15","Type":"ContainerStarted","Data":"e33e12ba3647f4501a58a4553dbb7b1c51bca14c15b465f3b091f0a5b71d5355"} Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.313160 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerStarted","Data":"c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576"} Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.316008 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9af05a91-f59f-4731-b75d-95fe7b869838","Type":"ContainerStarted","Data":"6eaeb99550dd5bea2790f8e1309d783e356c96a38367410502f3c22f1dcec546"} Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.316060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9af05a91-f59f-4731-b75d-95fe7b869838","Type":"ContainerStarted","Data":"6663b463d13eff2d097b6b0b92ca6912d6e4dd26f4cf1dfa35e97a479aa2e79b"} Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.328465 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.328441932 podStartE2EDuration="2.328441932s" podCreationTimestamp="2026-03-12 08:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 
08:26:49.326603282 +0000 UTC m=+1682.908639035" watchObservedRunningTime="2026-03-12 08:26:49.328441932 +0000 UTC m=+1682.910477665" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.339706 4809 scope.go:117] "RemoveContainer" containerID="5f939c40d3bdca00bb6117aebc702d36a2fd9408e6bbcb32a26ac93845af80c3" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.390624 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.420524 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.420732 4809 scope.go:117] "RemoveContainer" containerID="560c87acd58d8a281af7ea2df611e66383b0d7e979566659cf3dd4040a2f50d0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.433346 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 08:26:49 crc kubenswrapper[4809]: W0312 08:26:49.453997 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec6fbec3_cafa_494f_a79b_9fcdb1665bb6.slice/crio-eb5a90f3e7a936b1262e122b5503039a7cc974f86c5df18c913962a3160555d6 WatchSource:0}: Error finding container eb5a90f3e7a936b1262e122b5503039a7cc974f86c5df18c913962a3160555d6: Status 404 returned error can't find the container with id eb5a90f3e7a936b1262e122b5503039a7cc974f86c5df18c913962a3160555d6 Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.454183 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:26:49 crc kubenswrapper[4809]: E0312 08:26:49.455394 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="ceilometer-notification-agent" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.455419 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" 
containerName="ceilometer-notification-agent" Mar 12 08:26:49 crc kubenswrapper[4809]: E0312 08:26:49.455450 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="ceilometer-central-agent" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.455460 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="ceilometer-central-agent" Mar 12 08:26:49 crc kubenswrapper[4809]: E0312 08:26:49.455491 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="proxy-httpd" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.455607 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="proxy-httpd" Mar 12 08:26:49 crc kubenswrapper[4809]: E0312 08:26:49.455635 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="sg-core" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.455642 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="sg-core" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.455923 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="ceilometer-central-agent" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.455940 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="ceilometer-notification-agent" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.455957 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="sg-core" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.455972 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" containerName="proxy-httpd" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.459303 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.462807 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.466023 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.469901 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.550020 4809 scope.go:117] "RemoveContainer" containerID="ae4e3291e11f40ff9801ba2f53a90feaa68a42cfcf2ed5288c6d532f211b51d2" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.572697 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-scripts\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.573205 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.573262 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7n6x\" (UniqueName: \"kubernetes.io/projected/691eb456-8586-45fd-857f-74c3e351833e-kube-api-access-f7n6x\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " 
pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.573296 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.573362 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-run-httpd\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.573392 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-config-data\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.573600 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-log-httpd\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.676462 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.676559 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f7n6x\" (UniqueName: \"kubernetes.io/projected/691eb456-8586-45fd-857f-74c3e351833e-kube-api-access-f7n6x\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.676607 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.676692 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-run-httpd\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.676728 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-config-data\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.676795 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-log-httpd\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.676897 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-scripts\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 
08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.681081 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-run-httpd\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.681409 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-log-httpd\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.681809 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-scripts\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.686797 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-config-data\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.689762 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.692776 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.696966 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7n6x\" (UniqueName: \"kubernetes.io/projected/691eb456-8586-45fd-857f-74c3e351833e-kube-api-access-f7n6x\") pod \"ceilometer-0\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " pod="openstack/ceilometer-0" Mar 12 08:26:49 crc kubenswrapper[4809]: I0312 08:26:49.823343 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.332288 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerStarted","Data":"cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b"} Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.332966 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerStarted","Data":"0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e"} Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.334478 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6","Type":"ContainerStarted","Data":"aa6be4357821bedde1e573709475195e261bca2fc9ef070c870200a98f0aaf8b"} Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.334554 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6","Type":"ContainerStarted","Data":"59ab0c521fa163c77198d07bde34cb2b62605167dae76caaf4ce477105f79a38"} Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.334571 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ec6fbec3-cafa-494f-a79b-9fcdb1665bb6","Type":"ContainerStarted","Data":"eb5a90f3e7a936b1262e122b5503039a7cc974f86c5df18c913962a3160555d6"} Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.337903 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9af05a91-f59f-4731-b75d-95fe7b869838","Type":"ContainerStarted","Data":"a0ed8fadd291c27340b72784852a5ec5ab4b135131b450af6371bb5ec5b011c6"} Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.383225 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.383205021 podStartE2EDuration="2.383205021s" podCreationTimestamp="2026-03-12 08:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:50.375209813 +0000 UTC m=+1683.957245586" watchObservedRunningTime="2026-03-12 08:26:50.383205021 +0000 UTC m=+1683.965240754" Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.407869 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.40784791 podStartE2EDuration="3.40784791s" podCreationTimestamp="2026-03-12 08:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:26:50.398965679 +0000 UTC m=+1683.981001412" watchObservedRunningTime="2026-03-12 08:26:50.40784791 +0000 UTC m=+1683.989883633" Mar 12 08:26:50 crc kubenswrapper[4809]: I0312 08:26:50.483683 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:26:50 crc kubenswrapper[4809]: W0312 08:26:50.484499 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod691eb456_8586_45fd_857f_74c3e351833e.slice/crio-f2e1c870138442dcafb8057a7d7d200b3b6b6037c88f330b54b889b0ab727ae2 WatchSource:0}: Error finding container f2e1c870138442dcafb8057a7d7d200b3b6b6037c88f330b54b889b0ab727ae2: Status 404 returned error can't find the container with id f2e1c870138442dcafb8057a7d7d200b3b6b6037c88f330b54b889b0ab727ae2 Mar 12 08:26:51 crc kubenswrapper[4809]: I0312 08:26:51.128829 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5581de1-27a7-4f7f-8388-904d787d5ab8" path="/var/lib/kubelet/pods/a5581de1-27a7-4f7f-8388-904d787d5ab8/volumes" Mar 12 08:26:51 crc kubenswrapper[4809]: I0312 08:26:51.356023 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerStarted","Data":"bd9f033113dcc4975a1b1cafd33024d4a5cc5b82c297bbd866b06f6c33e3a888"} Mar 12 08:26:51 crc kubenswrapper[4809]: I0312 08:26:51.356077 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerStarted","Data":"f2e1c870138442dcafb8057a7d7d200b3b6b6037c88f330b54b889b0ab727ae2"} Mar 12 08:26:51 crc kubenswrapper[4809]: I0312 08:26:51.369342 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerStarted","Data":"2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41"} Mar 12 08:26:51 crc kubenswrapper[4809]: I0312 08:26:51.416720 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.127786027 podStartE2EDuration="5.41669122s" podCreationTimestamp="2026-03-12 08:26:46 +0000 UTC" firstStartedPulling="2026-03-12 08:26:47.437326648 +0000 UTC m=+1681.019362381" lastFinishedPulling="2026-03-12 08:26:50.726231841 +0000 UTC m=+1684.308267574" 
observedRunningTime="2026-03-12 08:26:51.407246193 +0000 UTC m=+1684.989281926" watchObservedRunningTime="2026-03-12 08:26:51.41669122 +0000 UTC m=+1684.998726953" Mar 12 08:26:52 crc kubenswrapper[4809]: I0312 08:26:52.383649 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerStarted","Data":"e3fa1fb7b97b4ad242eda5cbd7494ec7edc3307ce4eaec6702a36b121c320438"} Mar 12 08:26:52 crc kubenswrapper[4809]: I0312 08:26:52.769655 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 08:26:52 crc kubenswrapper[4809]: I0312 08:26:52.839890 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 08:26:52 crc kubenswrapper[4809]: I0312 08:26:52.839944 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 08:26:53 crc kubenswrapper[4809]: I0312 08:26:53.396907 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerStarted","Data":"0adc2d7685bb602f2e35bfee001b65a2a960e5354944d796fd42c5f88467d6b0"} Mar 12 08:26:55 crc kubenswrapper[4809]: I0312 08:26:55.434428 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerStarted","Data":"bd0c3b5ac079d1b6b09d3d022f80d91e7e0c8b082688b49d419828e83bb3e17e"} Mar 12 08:26:55 crc kubenswrapper[4809]: I0312 08:26:55.434910 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:26:55 crc kubenswrapper[4809]: I0312 08:26:55.480609 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.265167179 podStartE2EDuration="6.480577257s" podCreationTimestamp="2026-03-12 08:26:49 +0000 UTC" 
firstStartedPulling="2026-03-12 08:26:50.48832672 +0000 UTC m=+1684.070362453" lastFinishedPulling="2026-03-12 08:26:54.703736798 +0000 UTC m=+1688.285772531" observedRunningTime="2026-03-12 08:26:55.470192636 +0000 UTC m=+1689.052228369" watchObservedRunningTime="2026-03-12 08:26:55.480577257 +0000 UTC m=+1689.062613000" Mar 12 08:26:57 crc kubenswrapper[4809]: I0312 08:26:57.769778 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 08:26:57 crc kubenswrapper[4809]: I0312 08:26:57.817828 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 08:26:57 crc kubenswrapper[4809]: I0312 08:26:57.840408 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 08:26:57 crc kubenswrapper[4809]: I0312 08:26:57.842344 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 08:26:58 crc kubenswrapper[4809]: I0312 08:26:58.105893 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:26:58 crc kubenswrapper[4809]: E0312 08:26:58.106204 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:26:58 crc kubenswrapper[4809]: I0312 08:26:58.531108 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 08:26:58 crc kubenswrapper[4809]: I0312 08:26:58.787476 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Mar 12 08:26:58 crc kubenswrapper[4809]: I0312 08:26:58.787570 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 08:26:58 crc kubenswrapper[4809]: I0312 08:26:58.855593 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9af05a91-f59f-4731-b75d-95fe7b869838" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.18:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 08:26:58 crc kubenswrapper[4809]: I0312 08:26:58.855637 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9af05a91-f59f-4731-b75d-95fe7b869838" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.18:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 08:26:59 crc kubenswrapper[4809]: I0312 08:26:59.801366 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec6fbec3-cafa-494f-a79b-9fcdb1665bb6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.19:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 08:26:59 crc kubenswrapper[4809]: I0312 08:26:59.801328 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec6fbec3-cafa-494f-a79b-9fcdb1665bb6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.19:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 08:27:07 crc kubenswrapper[4809]: I0312 08:27:07.861150 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 08:27:07 crc kubenswrapper[4809]: I0312 08:27:07.863878 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Mar 12 08:27:07 crc kubenswrapper[4809]: I0312 08:27:07.871957 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 08:27:07 crc kubenswrapper[4809]: I0312 08:27:07.872405 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 08:27:08 crc kubenswrapper[4809]: I0312 08:27:08.794026 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 08:27:08 crc kubenswrapper[4809]: I0312 08:27:08.794834 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 08:27:08 crc kubenswrapper[4809]: I0312 08:27:08.797232 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 08:27:08 crc kubenswrapper[4809]: I0312 08:27:08.809429 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 08:27:09 crc kubenswrapper[4809]: I0312 08:27:09.626007 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 08:27:09 crc kubenswrapper[4809]: I0312 08:27:09.635032 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 08:27:11 crc kubenswrapper[4809]: I0312 08:27:11.107107 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:27:11 crc kubenswrapper[4809]: E0312 08:27:11.107680 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" 
podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:27:17 crc kubenswrapper[4809]: I0312 08:27:17.278139 4809 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod586c76fb-3a10-473c-b9d9-14fb71cc4f6e"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod586c76fb-3a10-473c-b9d9-14fb71cc4f6e] : Timed out while waiting for systemd to remove kubepods-besteffort-pod586c76fb_3a10_473c_b9d9_14fb71cc4f6e.slice" Mar 12 08:27:19 crc kubenswrapper[4809]: I0312 08:27:19.838031 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 08:27:23 crc kubenswrapper[4809]: I0312 08:27:23.107744 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:27:23 crc kubenswrapper[4809]: E0312 08:27:23.109827 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:27:24 crc kubenswrapper[4809]: I0312 08:27:24.886355 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:27:24 crc kubenswrapper[4809]: I0312 08:27:24.886942 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="160163b1-c728-46dd-8caa-df11fdb18266" containerName="kube-state-metrics" containerID="cri-o://936147b419bd7fc41f9179fd87b532efe109407e02f3eb966f79ebaa35a502a6" gracePeriod=30 Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.044541 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 
08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.044844 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="05554710-f410-4b78-9fb2-22fc55aeea98" containerName="mysqld-exporter" containerID="cri-o://1e64bfd09d8d77d66d9596ed68feb37d5b5cebf7e3f4d7a789cd88e5c9770d10" gracePeriod=30 Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.886067 4809 generic.go:334] "Generic (PLEG): container finished" podID="05554710-f410-4b78-9fb2-22fc55aeea98" containerID="1e64bfd09d8d77d66d9596ed68feb37d5b5cebf7e3f4d7a789cd88e5c9770d10" exitCode=2 Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.886711 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"05554710-f410-4b78-9fb2-22fc55aeea98","Type":"ContainerDied","Data":"1e64bfd09d8d77d66d9596ed68feb37d5b5cebf7e3f4d7a789cd88e5c9770d10"} Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.886751 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"05554710-f410-4b78-9fb2-22fc55aeea98","Type":"ContainerDied","Data":"8eb5d80b9d54c4678d3288ac32a72b38e9807d06a729e6ad2ea4ba962034c235"} Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.886768 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb5d80b9d54c4678d3288ac32a72b38e9807d06a729e6ad2ea4ba962034c235" Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.891448 4809 generic.go:334] "Generic (PLEG): container finished" podID="160163b1-c728-46dd-8caa-df11fdb18266" containerID="936147b419bd7fc41f9179fd87b532efe109407e02f3eb966f79ebaa35a502a6" exitCode=2 Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.891503 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"160163b1-c728-46dd-8caa-df11fdb18266","Type":"ContainerDied","Data":"936147b419bd7fc41f9179fd87b532efe109407e02f3eb966f79ebaa35a502a6"} Mar 12 08:27:25 
crc kubenswrapper[4809]: I0312 08:27:25.891538 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"160163b1-c728-46dd-8caa-df11fdb18266","Type":"ContainerDied","Data":"95a3cdbe11154e053127c70e279463673de4874127884bb0bca72f15818991db"} Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.891556 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95a3cdbe11154e053127c70e279463673de4874127884bb0bca72f15818991db" Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.934054 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 08:27:25 crc kubenswrapper[4809]: I0312 08:27:25.943076 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.053585 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhwtz\" (UniqueName: \"kubernetes.io/projected/160163b1-c728-46dd-8caa-df11fdb18266-kube-api-access-qhwtz\") pod \"160163b1-c728-46dd-8caa-df11fdb18266\" (UID: \"160163b1-c728-46dd-8caa-df11fdb18266\") " Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.053689 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttxwt\" (UniqueName: \"kubernetes.io/projected/05554710-f410-4b78-9fb2-22fc55aeea98-kube-api-access-ttxwt\") pod \"05554710-f410-4b78-9fb2-22fc55aeea98\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.053795 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-config-data\") pod \"05554710-f410-4b78-9fb2-22fc55aeea98\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 
08:27:26.053872 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-combined-ca-bundle\") pod \"05554710-f410-4b78-9fb2-22fc55aeea98\" (UID: \"05554710-f410-4b78-9fb2-22fc55aeea98\") " Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.078969 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05554710-f410-4b78-9fb2-22fc55aeea98-kube-api-access-ttxwt" (OuterVolumeSpecName: "kube-api-access-ttxwt") pod "05554710-f410-4b78-9fb2-22fc55aeea98" (UID: "05554710-f410-4b78-9fb2-22fc55aeea98"). InnerVolumeSpecName "kube-api-access-ttxwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.081833 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160163b1-c728-46dd-8caa-df11fdb18266-kube-api-access-qhwtz" (OuterVolumeSpecName: "kube-api-access-qhwtz") pod "160163b1-c728-46dd-8caa-df11fdb18266" (UID: "160163b1-c728-46dd-8caa-df11fdb18266"). InnerVolumeSpecName "kube-api-access-qhwtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.117652 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05554710-f410-4b78-9fb2-22fc55aeea98" (UID: "05554710-f410-4b78-9fb2-22fc55aeea98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.160061 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhwtz\" (UniqueName: \"kubernetes.io/projected/160163b1-c728-46dd-8caa-df11fdb18266-kube-api-access-qhwtz\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.160105 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttxwt\" (UniqueName: \"kubernetes.io/projected/05554710-f410-4b78-9fb2-22fc55aeea98-kube-api-access-ttxwt\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.160138 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.202523 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-config-data" (OuterVolumeSpecName: "config-data") pod "05554710-f410-4b78-9fb2-22fc55aeea98" (UID: "05554710-f410-4b78-9fb2-22fc55aeea98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.262897 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05554710-f410-4b78-9fb2-22fc55aeea98-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.901018 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.901137 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.953813 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.973453 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:27:26 crc kubenswrapper[4809]: I0312 08:27:26.994657 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.038415 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.057776 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:27:27 crc kubenswrapper[4809]: E0312 08:27:27.058521 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05554710-f410-4b78-9fb2-22fc55aeea98" containerName="mysqld-exporter" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.058553 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="05554710-f410-4b78-9fb2-22fc55aeea98" containerName="mysqld-exporter" Mar 12 08:27:27 crc kubenswrapper[4809]: E0312 08:27:27.058600 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160163b1-c728-46dd-8caa-df11fdb18266" containerName="kube-state-metrics" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.058612 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="160163b1-c728-46dd-8caa-df11fdb18266" containerName="kube-state-metrics" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.058958 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="05554710-f410-4b78-9fb2-22fc55aeea98" containerName="mysqld-exporter" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.058988 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="160163b1-c728-46dd-8caa-df11fdb18266" containerName="kube-state-metrics" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.060499 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.064424 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.064520 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.085179 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.124516 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05554710-f410-4b78-9fb2-22fc55aeea98" path="/var/lib/kubelet/pods/05554710-f410-4b78-9fb2-22fc55aeea98/volumes" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.125628 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160163b1-c728-46dd-8caa-df11fdb18266" path="/var/lib/kubelet/pods/160163b1-c728-46dd-8caa-df11fdb18266/volumes" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.127684 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.131304 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.134707 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.134892 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.140593 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.193548 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.193662 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcg7r\" (UniqueName: \"kubernetes.io/projected/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-api-access-dcg7r\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.193732 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.193768 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.296926 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-config-data\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.297065 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.297085 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.297240 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.297323 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcg7r\" (UniqueName: 
\"kubernetes.io/projected/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-api-access-dcg7r\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.297406 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.297437 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.297470 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxb6\" (UniqueName: \"kubernetes.io/projected/b70e91cb-e032-4dcf-9e4a-4f82241f7398-kube-api-access-fpxb6\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.305615 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.305641 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.308523 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.317720 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcg7r\" (UniqueName: \"kubernetes.io/projected/b92111bc-ddbe-401a-83c3-2b0c1e805c6a-kube-api-access-dcg7r\") pod \"kube-state-metrics-0\" (UID: \"b92111bc-ddbe-401a-83c3-2b0c1e805c6a\") " pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.395290 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.400043 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpxb6\" (UniqueName: \"kubernetes.io/projected/b70e91cb-e032-4dcf-9e4a-4f82241f7398-kube-api-access-fpxb6\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.400183 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-config-data\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.400248 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.400289 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.405332 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.405696 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.405835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70e91cb-e032-4dcf-9e4a-4f82241f7398-config-data\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.420573 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpxb6\" (UniqueName: \"kubernetes.io/projected/b70e91cb-e032-4dcf-9e4a-4f82241f7398-kube-api-access-fpxb6\") pod \"mysqld-exporter-0\" (UID: \"b70e91cb-e032-4dcf-9e4a-4f82241f7398\") " pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.455699 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.657806 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.658531 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="ceilometer-central-agent" containerID="cri-o://bd9f033113dcc4975a1b1cafd33024d4a5cc5b82c297bbd866b06f6c33e3a888" gracePeriod=30 Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.659457 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="proxy-httpd" containerID="cri-o://bd0c3b5ac079d1b6b09d3d022f80d91e7e0c8b082688b49d419828e83bb3e17e" gracePeriod=30 Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.659517 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="sg-core" containerID="cri-o://0adc2d7685bb602f2e35bfee001b65a2a960e5354944d796fd42c5f88467d6b0" gracePeriod=30 Mar 12 08:27:27 crc kubenswrapper[4809]: I0312 08:27:27.659556 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="ceilometer-notification-agent" containerID="cri-o://e3fa1fb7b97b4ad242eda5cbd7494ec7edc3307ce4eaec6702a36b121c320438" gracePeriod=30 Mar 12 08:27:28 crc kubenswrapper[4809]: I0312 08:27:28.000187 4809 generic.go:334] "Generic (PLEG): container finished" podID="691eb456-8586-45fd-857f-74c3e351833e" containerID="bd0c3b5ac079d1b6b09d3d022f80d91e7e0c8b082688b49d419828e83bb3e17e" exitCode=0 Mar 12 08:27:28 crc kubenswrapper[4809]: I0312 08:27:28.000226 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="691eb456-8586-45fd-857f-74c3e351833e" containerID="0adc2d7685bb602f2e35bfee001b65a2a960e5354944d796fd42c5f88467d6b0" exitCode=2 Mar 12 08:27:28 crc kubenswrapper[4809]: I0312 08:27:28.000255 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerDied","Data":"bd0c3b5ac079d1b6b09d3d022f80d91e7e0c8b082688b49d419828e83bb3e17e"} Mar 12 08:27:28 crc kubenswrapper[4809]: I0312 08:27:28.000288 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerDied","Data":"0adc2d7685bb602f2e35bfee001b65a2a960e5354944d796fd42c5f88467d6b0"} Mar 12 08:27:28 crc kubenswrapper[4809]: I0312 08:27:28.062712 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 08:27:28 crc kubenswrapper[4809]: I0312 08:27:28.079920 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.041125 4809 generic.go:334] "Generic (PLEG): container finished" podID="691eb456-8586-45fd-857f-74c3e351833e" containerID="e3fa1fb7b97b4ad242eda5cbd7494ec7edc3307ce4eaec6702a36b121c320438" exitCode=0 Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.041909 4809 generic.go:334] "Generic (PLEG): container finished" podID="691eb456-8586-45fd-857f-74c3e351833e" containerID="bd9f033113dcc4975a1b1cafd33024d4a5cc5b82c297bbd866b06f6c33e3a888" exitCode=0 Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.041222 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerDied","Data":"e3fa1fb7b97b4ad242eda5cbd7494ec7edc3307ce4eaec6702a36b121c320438"} Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.041999 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerDied","Data":"bd9f033113dcc4975a1b1cafd33024d4a5cc5b82c297bbd866b06f6c33e3a888"} Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.049743 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b70e91cb-e032-4dcf-9e4a-4f82241f7398","Type":"ContainerStarted","Data":"cfce7f5c6ba9472d1525f346647dafd1be111e003cf90985c31e8c604309fa70"} Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.049799 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b70e91cb-e032-4dcf-9e4a-4f82241f7398","Type":"ContainerStarted","Data":"7cc849a3c126f5540a37786fb3d3310bdbbb41c0d5d0a80dbf6e2f5a7546335d"} Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.059869 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b92111bc-ddbe-401a-83c3-2b0c1e805c6a","Type":"ContainerStarted","Data":"1f3af126e12fec7d1cf30bc40044b17f47526d6cd60671e66d8db4099042c526"} Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.059916 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b92111bc-ddbe-401a-83c3-2b0c1e805c6a","Type":"ContainerStarted","Data":"dd080cf7cb402d90d71c65535311a332a4138a8556de2c940c03cd01b8d83e7d"} Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.060802 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.159262 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.4995462760000002 podStartE2EDuration="3.159237832s" podCreationTimestamp="2026-03-12 08:27:26 +0000 UTC" firstStartedPulling="2026-03-12 08:27:28.058905172 +0000 UTC m=+1721.640940905" lastFinishedPulling="2026-03-12 08:27:28.718596728 +0000 UTC m=+1722.300632461" 
observedRunningTime="2026-03-12 08:27:29.072501535 +0000 UTC m=+1722.654537268" watchObservedRunningTime="2026-03-12 08:27:29.159237832 +0000 UTC m=+1722.741273565" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.174056 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.757021422 podStartE2EDuration="3.174029564s" podCreationTimestamp="2026-03-12 08:27:26 +0000 UTC" firstStartedPulling="2026-03-12 08:27:28.053211697 +0000 UTC m=+1721.635247430" lastFinishedPulling="2026-03-12 08:27:28.470219839 +0000 UTC m=+1722.052255572" observedRunningTime="2026-03-12 08:27:29.11761597 +0000 UTC m=+1722.699651713" watchObservedRunningTime="2026-03-12 08:27:29.174029564 +0000 UTC m=+1722.756065297" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.305190 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.445549 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-combined-ca-bundle\") pod \"691eb456-8586-45fd-857f-74c3e351833e\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.445689 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-scripts\") pod \"691eb456-8586-45fd-857f-74c3e351833e\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.445709 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-config-data\") pod \"691eb456-8586-45fd-857f-74c3e351833e\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " 
Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.445770 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-sg-core-conf-yaml\") pod \"691eb456-8586-45fd-857f-74c3e351833e\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.445835 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-run-httpd\") pod \"691eb456-8586-45fd-857f-74c3e351833e\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.445896 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7n6x\" (UniqueName: \"kubernetes.io/projected/691eb456-8586-45fd-857f-74c3e351833e-kube-api-access-f7n6x\") pod \"691eb456-8586-45fd-857f-74c3e351833e\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.445944 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-log-httpd\") pod \"691eb456-8586-45fd-857f-74c3e351833e\" (UID: \"691eb456-8586-45fd-857f-74c3e351833e\") " Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.447423 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "691eb456-8586-45fd-857f-74c3e351833e" (UID: "691eb456-8586-45fd-857f-74c3e351833e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.451266 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "691eb456-8586-45fd-857f-74c3e351833e" (UID: "691eb456-8586-45fd-857f-74c3e351833e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.470195 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-scripts" (OuterVolumeSpecName: "scripts") pod "691eb456-8586-45fd-857f-74c3e351833e" (UID: "691eb456-8586-45fd-857f-74c3e351833e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.482415 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691eb456-8586-45fd-857f-74c3e351833e-kube-api-access-f7n6x" (OuterVolumeSpecName: "kube-api-access-f7n6x") pod "691eb456-8586-45fd-857f-74c3e351833e" (UID: "691eb456-8586-45fd-857f-74c3e351833e"). InnerVolumeSpecName "kube-api-access-f7n6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.506331 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "691eb456-8586-45fd-857f-74c3e351833e" (UID: "691eb456-8586-45fd-857f-74c3e351833e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.550697 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.550734 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.550748 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.550757 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7n6x\" (UniqueName: \"kubernetes.io/projected/691eb456-8586-45fd-857f-74c3e351833e-kube-api-access-f7n6x\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.550767 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/691eb456-8586-45fd-857f-74c3e351833e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.558214 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "691eb456-8586-45fd-857f-74c3e351833e" (UID: "691eb456-8586-45fd-857f-74c3e351833e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.593578 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-config-data" (OuterVolumeSpecName: "config-data") pod "691eb456-8586-45fd-857f-74c3e351833e" (UID: "691eb456-8586-45fd-857f-74c3e351833e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.655060 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:29 crc kubenswrapper[4809]: I0312 08:27:29.655129 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691eb456-8586-45fd-857f-74c3e351833e-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.077807 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"691eb456-8586-45fd-857f-74c3e351833e","Type":"ContainerDied","Data":"f2e1c870138442dcafb8057a7d7d200b3b6b6037c88f330b54b889b0ab727ae2"} Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.078379 4809 scope.go:117] "RemoveContainer" containerID="bd0c3b5ac079d1b6b09d3d022f80d91e7e0c8b082688b49d419828e83bb3e17e" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.077895 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.121737 4809 scope.go:117] "RemoveContainer" containerID="0adc2d7685bb602f2e35bfee001b65a2a960e5354944d796fd42c5f88467d6b0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.127299 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.157139 4809 scope.go:117] "RemoveContainer" containerID="e3fa1fb7b97b4ad242eda5cbd7494ec7edc3307ce4eaec6702a36b121c320438" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.160528 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.177400 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:30 crc kubenswrapper[4809]: E0312 08:27:30.203167 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="proxy-httpd" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.203217 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="proxy-httpd" Mar 12 08:27:30 crc kubenswrapper[4809]: E0312 08:27:30.203344 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="ceilometer-notification-agent" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.203357 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="ceilometer-notification-agent" Mar 12 08:27:30 crc kubenswrapper[4809]: E0312 08:27:30.203418 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="sg-core" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.203427 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="sg-core" Mar 12 08:27:30 crc kubenswrapper[4809]: E0312 08:27:30.203463 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="ceilometer-central-agent" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.203470 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="ceilometer-central-agent" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.204221 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="ceilometer-central-agent" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.204284 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="proxy-httpd" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.204299 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="ceilometer-notification-agent" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.204326 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="691eb456-8586-45fd-857f-74c3e351833e" containerName="sg-core" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.207353 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.207492 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.210969 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.211367 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.213566 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.225924 4809 scope.go:117] "RemoveContainer" containerID="bd9f033113dcc4975a1b1cafd33024d4a5cc5b82c297bbd866b06f6c33e3a888" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.374507 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.374617 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-scripts\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.375161 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.375364 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-log-httpd\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.375436 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-run-httpd\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.375706 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx8mp\" (UniqueName: \"kubernetes.io/projected/e1c73c34-73de-42a9-a307-739520601529-kube-api-access-kx8mp\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.376247 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-config-data\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.376320 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479072 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479151 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-scripts\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479238 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479278 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-log-httpd\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479308 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-run-httpd\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479357 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx8mp\" (UniqueName: \"kubernetes.io/projected/e1c73c34-73de-42a9-a307-739520601529-kube-api-access-kx8mp\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 
08:27:30.479391 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-config-data\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479418 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479919 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-log-httpd\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.479979 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-run-httpd\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.485171 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-scripts\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.485634 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " 
pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.485844 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-config-data\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.487961 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.495959 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.499091 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx8mp\" (UniqueName: \"kubernetes.io/projected/e1c73c34-73de-42a9-a307-739520601529-kube-api-access-kx8mp\") pod \"ceilometer-0\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " pod="openstack/ceilometer-0" Mar 12 08:27:30 crc kubenswrapper[4809]: I0312 08:27:30.535535 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:27:31 crc kubenswrapper[4809]: I0312 08:27:31.136199 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="691eb456-8586-45fd-857f-74c3e351833e" path="/var/lib/kubelet/pods/691eb456-8586-45fd-857f-74c3e351833e/volumes" Mar 12 08:27:31 crc kubenswrapper[4809]: I0312 08:27:31.137901 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:32 crc kubenswrapper[4809]: I0312 08:27:32.111363 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerStarted","Data":"38f62d995fbf094e492c1c740a1a132f330c19740dd677b7a627a700df227093"} Mar 12 08:27:32 crc kubenswrapper[4809]: I0312 08:27:32.111813 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerStarted","Data":"ad88b7ede71a62668d882a2f29056e7f18ff4db88296c95c9416e8764c5c7935"} Mar 12 08:27:33 crc kubenswrapper[4809]: I0312 08:27:33.127257 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerStarted","Data":"fdbf48cdaf94129003bad95d15457222c4e97e3f7eebfbff9e004ffa0165f678"} Mar 12 08:27:33 crc kubenswrapper[4809]: I0312 08:27:33.986258 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jw2qr"] Mar 12 08:27:33 crc kubenswrapper[4809]: I0312 08:27:33.998274 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jw2qr"] Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.098644 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-j9k6d"] Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.100833 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.112936 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-j9k6d"] Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.141814 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerStarted","Data":"c8b9c63d6c85c8c202552d66b1ad95dd468f69b7ac199ee10c91e0e6eba06156"} Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.220850 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-combined-ca-bundle\") pod \"heat-db-sync-j9k6d\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.220919 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f62rs\" (UniqueName: \"kubernetes.io/projected/8873bebf-493e-4f3d-85a0-fca09ce4d946-kube-api-access-f62rs\") pod \"heat-db-sync-j9k6d\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.221499 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-config-data\") pod \"heat-db-sync-j9k6d\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.328768 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f62rs\" (UniqueName: \"kubernetes.io/projected/8873bebf-493e-4f3d-85a0-fca09ce4d946-kube-api-access-f62rs\") pod \"heat-db-sync-j9k6d\" (UID: 
\"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.329023 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-config-data\") pod \"heat-db-sync-j9k6d\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.329741 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-combined-ca-bundle\") pod \"heat-db-sync-j9k6d\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.343407 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-combined-ca-bundle\") pod \"heat-db-sync-j9k6d\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.359246 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-config-data\") pod \"heat-db-sync-j9k6d\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.387856 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f62rs\" (UniqueName: \"kubernetes.io/projected/8873bebf-493e-4f3d-85a0-fca09ce4d946-kube-api-access-f62rs\") pod \"heat-db-sync-j9k6d\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:34 crc kubenswrapper[4809]: I0312 08:27:34.466423 4809 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-j9k6d" Mar 12 08:27:35 crc kubenswrapper[4809]: I0312 08:27:35.120850 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ef0743-567a-4a4b-aada-a0bc3659b200" path="/var/lib/kubelet/pods/a8ef0743-567a-4a4b-aada-a0bc3659b200/volumes" Mar 12 08:27:35 crc kubenswrapper[4809]: I0312 08:27:35.235854 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-j9k6d"] Mar 12 08:27:35 crc kubenswrapper[4809]: I0312 08:27:35.906202 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 08:27:36 crc kubenswrapper[4809]: I0312 08:27:36.107586 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:27:36 crc kubenswrapper[4809]: E0312 08:27:36.108324 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:27:36 crc kubenswrapper[4809]: I0312 08:27:36.241887 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-j9k6d" event={"ID":"8873bebf-493e-4f3d-85a0-fca09ce4d946","Type":"ContainerStarted","Data":"1a5517131f01917df9bf32604dd01030d643f427eb9114005b8d98d87391f2ba"} Mar 12 08:27:37 crc kubenswrapper[4809]: I0312 08:27:37.304997 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerStarted","Data":"0d75b10d5a8fb7e3d7f46244f747709ea95e2c09a6daf45c86e6e4d9a98992c0"} Mar 12 08:27:37 crc kubenswrapper[4809]: I0312 08:27:37.305492 4809 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:27:37 crc kubenswrapper[4809]: I0312 08:27:37.391131 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6994141320000002 podStartE2EDuration="7.391074891s" podCreationTimestamp="2026-03-12 08:27:30 +0000 UTC" firstStartedPulling="2026-03-12 08:27:31.121888435 +0000 UTC m=+1724.703924168" lastFinishedPulling="2026-03-12 08:27:35.813549194 +0000 UTC m=+1729.395584927" observedRunningTime="2026-03-12 08:27:37.364287943 +0000 UTC m=+1730.946323676" watchObservedRunningTime="2026-03-12 08:27:37.391074891 +0000 UTC m=+1730.973110624" Mar 12 08:27:37 crc kubenswrapper[4809]: I0312 08:27:37.444620 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 08:27:37 crc kubenswrapper[4809]: I0312 08:27:37.575765 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:27:40 crc kubenswrapper[4809]: I0312 08:27:40.688422 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:40 crc kubenswrapper[4809]: I0312 08:27:40.689038 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="ceilometer-central-agent" containerID="cri-o://38f62d995fbf094e492c1c740a1a132f330c19740dd677b7a627a700df227093" gracePeriod=30 Mar 12 08:27:40 crc kubenswrapper[4809]: I0312 08:27:40.689729 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="proxy-httpd" containerID="cri-o://0d75b10d5a8fb7e3d7f46244f747709ea95e2c09a6daf45c86e6e4d9a98992c0" gracePeriod=30 Mar 12 08:27:40 crc kubenswrapper[4809]: I0312 08:27:40.689791 4809 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="sg-core" containerID="cri-o://c8b9c63d6c85c8c202552d66b1ad95dd468f69b7ac199ee10c91e0e6eba06156" gracePeriod=30 Mar 12 08:27:40 crc kubenswrapper[4809]: I0312 08:27:40.689826 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="ceilometer-notification-agent" containerID="cri-o://fdbf48cdaf94129003bad95d15457222c4e97e3f7eebfbff9e004ffa0165f678" gracePeriod=30 Mar 12 08:27:41 crc kubenswrapper[4809]: I0312 08:27:41.383961 4809 generic.go:334] "Generic (PLEG): container finished" podID="e1c73c34-73de-42a9-a307-739520601529" containerID="0d75b10d5a8fb7e3d7f46244f747709ea95e2c09a6daf45c86e6e4d9a98992c0" exitCode=0 Mar 12 08:27:41 crc kubenswrapper[4809]: I0312 08:27:41.384708 4809 generic.go:334] "Generic (PLEG): container finished" podID="e1c73c34-73de-42a9-a307-739520601529" containerID="c8b9c63d6c85c8c202552d66b1ad95dd468f69b7ac199ee10c91e0e6eba06156" exitCode=2 Mar 12 08:27:41 crc kubenswrapper[4809]: I0312 08:27:41.384036 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerDied","Data":"0d75b10d5a8fb7e3d7f46244f747709ea95e2c09a6daf45c86e6e4d9a98992c0"} Mar 12 08:27:41 crc kubenswrapper[4809]: I0312 08:27:41.384760 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerDied","Data":"c8b9c63d6c85c8c202552d66b1ad95dd468f69b7ac199ee10c91e0e6eba06156"} Mar 12 08:27:41 crc kubenswrapper[4809]: I0312 08:27:41.963021 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerName="rabbitmq" 
containerID="cri-o://189ef92e31873bc3527a72453e076cc8cc76e11ec56712bdc61eaa1ca3c4e771" gracePeriod=604794 Mar 12 08:27:42 crc kubenswrapper[4809]: I0312 08:27:42.415944 4809 generic.go:334] "Generic (PLEG): container finished" podID="e1c73c34-73de-42a9-a307-739520601529" containerID="fdbf48cdaf94129003bad95d15457222c4e97e3f7eebfbff9e004ffa0165f678" exitCode=0 Mar 12 08:27:42 crc kubenswrapper[4809]: I0312 08:27:42.416534 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerDied","Data":"fdbf48cdaf94129003bad95d15457222c4e97e3f7eebfbff9e004ffa0165f678"} Mar 12 08:27:42 crc kubenswrapper[4809]: I0312 08:27:42.695636 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.421127 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerName="rabbitmq" containerID="cri-o://b965992fcf175c016f5c92c5db6095b16f8850df2597d532cfce28009b0f6aea" gracePeriod=604795 Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.453555 4809 generic.go:334] "Generic (PLEG): container finished" podID="e1c73c34-73de-42a9-a307-739520601529" containerID="38f62d995fbf094e492c1c740a1a132f330c19740dd677b7a627a700df227093" exitCode=0 Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.453668 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerDied","Data":"38f62d995fbf094e492c1c740a1a132f330c19740dd677b7a627a700df227093"} Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.601080 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665206 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-config-data\") pod \"e1c73c34-73de-42a9-a307-739520601529\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665473 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx8mp\" (UniqueName: \"kubernetes.io/projected/e1c73c34-73de-42a9-a307-739520601529-kube-api-access-kx8mp\") pod \"e1c73c34-73de-42a9-a307-739520601529\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665523 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-run-httpd\") pod \"e1c73c34-73de-42a9-a307-739520601529\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665585 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-combined-ca-bundle\") pod \"e1c73c34-73de-42a9-a307-739520601529\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665644 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-log-httpd\") pod \"e1c73c34-73de-42a9-a307-739520601529\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665737 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-scripts\") pod \"e1c73c34-73de-42a9-a307-739520601529\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665832 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-sg-core-conf-yaml\") pod \"e1c73c34-73de-42a9-a307-739520601529\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665880 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-ceilometer-tls-certs\") pod \"e1c73c34-73de-42a9-a307-739520601529\" (UID: \"e1c73c34-73de-42a9-a307-739520601529\") " Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.665961 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e1c73c34-73de-42a9-a307-739520601529" (UID: "e1c73c34-73de-42a9-a307-739520601529"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.677325 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e1c73c34-73de-42a9-a307-739520601529" (UID: "e1c73c34-73de-42a9-a307-739520601529"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.679380 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.688492 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c73c34-73de-42a9-a307-739520601529-kube-api-access-kx8mp" (OuterVolumeSpecName: "kube-api-access-kx8mp") pod "e1c73c34-73de-42a9-a307-739520601529" (UID: "e1c73c34-73de-42a9-a307-739520601529"). InnerVolumeSpecName "kube-api-access-kx8mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.695505 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-scripts" (OuterVolumeSpecName: "scripts") pod "e1c73c34-73de-42a9-a307-739520601529" (UID: "e1c73c34-73de-42a9-a307-739520601529"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.766167 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e1c73c34-73de-42a9-a307-739520601529" (UID: "e1c73c34-73de-42a9-a307-739520601529"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.783021 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx8mp\" (UniqueName: \"kubernetes.io/projected/e1c73c34-73de-42a9-a307-739520601529-kube-api-access-kx8mp\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.783076 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1c73c34-73de-42a9-a307-739520601529-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.783090 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.783100 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.802606 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e1c73c34-73de-42a9-a307-739520601529" (UID: "e1c73c34-73de-42a9-a307-739520601529"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.820320 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1c73c34-73de-42a9-a307-739520601529" (UID: "e1c73c34-73de-42a9-a307-739520601529"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.889174 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.889492 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.904286 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-config-data" (OuterVolumeSpecName: "config-data") pod "e1c73c34-73de-42a9-a307-739520601529" (UID: "e1c73c34-73de-42a9-a307-739520601529"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:43 crc kubenswrapper[4809]: I0312 08:27:43.992078 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c73c34-73de-42a9-a307-739520601529-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.472219 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1c73c34-73de-42a9-a307-739520601529","Type":"ContainerDied","Data":"ad88b7ede71a62668d882a2f29056e7f18ff4db88296c95c9416e8764c5c7935"} Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.472304 4809 scope.go:117] "RemoveContainer" containerID="0d75b10d5a8fb7e3d7f46244f747709ea95e2c09a6daf45c86e6e4d9a98992c0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.472330 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.544846 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.562877 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.585830 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:44 crc kubenswrapper[4809]: E0312 08:27:44.586770 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="proxy-httpd" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.586787 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="proxy-httpd" Mar 12 08:27:44 crc kubenswrapper[4809]: E0312 08:27:44.586805 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="sg-core" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.586812 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="sg-core" Mar 12 08:27:44 crc kubenswrapper[4809]: E0312 08:27:44.586845 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="ceilometer-central-agent" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.586851 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="ceilometer-central-agent" Mar 12 08:27:44 crc kubenswrapper[4809]: E0312 08:27:44.586859 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="ceilometer-notification-agent" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.586865 4809 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="ceilometer-notification-agent" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.587189 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="proxy-httpd" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.587206 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="ceilometer-central-agent" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.587226 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="sg-core" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.587237 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c73c34-73de-42a9-a307-739520601529" containerName="ceilometer-notification-agent" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.589867 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.599042 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.599373 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.599524 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.601940 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.716168 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-config-data\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.716286 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-log-httpd\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.716321 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pkf\" (UniqueName: \"kubernetes.io/projected/b7c728ba-8361-4d19-833d-b3494509f355-kube-api-access-j5pkf\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.716353 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-run-httpd\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.716503 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.716645 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-scripts\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.716741 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.717367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.820842 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.821281 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5pkf\" (UniqueName: \"kubernetes.io/projected/b7c728ba-8361-4d19-833d-b3494509f355-kube-api-access-j5pkf\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.821320 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-run-httpd\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.821364 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.821406 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-scripts\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.821441 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.821597 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-log-httpd\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.821952 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-run-httpd\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.823370 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.823673 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-config-data\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.829088 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.829520 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-config-data\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 
08:27:44.830540 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.830738 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.833980 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-scripts\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.839377 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5pkf\" (UniqueName: \"kubernetes.io/projected/b7c728ba-8361-4d19-833d-b3494509f355-kube-api-access-j5pkf\") pod \"ceilometer-0\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " pod="openstack/ceilometer-0" Mar 12 08:27:44 crc kubenswrapper[4809]: I0312 08:27:44.925245 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 08:27:45 crc kubenswrapper[4809]: I0312 08:27:45.140332 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c73c34-73de-42a9-a307-739520601529" path="/var/lib/kubelet/pods/e1c73c34-73de-42a9-a307-739520601529/volumes" Mar 12 08:27:48 crc kubenswrapper[4809]: I0312 08:27:48.112021 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:27:48 crc kubenswrapper[4809]: E0312 08:27:48.113343 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:27:48 crc kubenswrapper[4809]: I0312 08:27:48.537003 4809 generic.go:334] "Generic (PLEG): container finished" podID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerID="189ef92e31873bc3527a72453e076cc8cc76e11ec56712bdc61eaa1ca3c4e771" exitCode=0 Mar 12 08:27:48 crc kubenswrapper[4809]: I0312 08:27:48.537089 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c45bb76f-92d7-4214-9ce3-c64361a40416","Type":"ContainerDied","Data":"189ef92e31873bc3527a72453e076cc8cc76e11ec56712bdc61eaa1ca3c4e771"} Mar 12 08:27:49 crc kubenswrapper[4809]: I0312 08:27:49.858015 4809 scope.go:117] "RemoveContainer" containerID="c8b9c63d6c85c8c202552d66b1ad95dd468f69b7ac199ee10c91e0e6eba06156" Mar 12 08:27:50 crc kubenswrapper[4809]: I0312 08:27:50.585925 4809 generic.go:334] "Generic (PLEG): container finished" podID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerID="b965992fcf175c016f5c92c5db6095b16f8850df2597d532cfce28009b0f6aea" exitCode=0 Mar 12 08:27:50 crc 
kubenswrapper[4809]: I0312 08:27:50.586029 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0","Type":"ContainerDied","Data":"b965992fcf175c016f5c92c5db6095b16f8850df2597d532cfce28009b0f6aea"} Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.088434 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-l59t6"] Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.097333 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.101074 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.127636 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-l59t6"] Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.214238 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.214451 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.214533 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-config\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.214733 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.214826 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.214887 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.214940 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx68f\" (UniqueName: \"kubernetes.io/projected/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-kube-api-access-tx68f\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.317736 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.317849 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.317885 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-config\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.317979 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.318024 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.318052 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.318088 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx68f\" (UniqueName: \"kubernetes.io/projected/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-kube-api-access-tx68f\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.319716 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.319997 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.320712 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.320972 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.320972 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.321363 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-config\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.346505 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx68f\" (UniqueName: \"kubernetes.io/projected/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-kube-api-access-tx68f\") pod \"dnsmasq-dns-7d84b4d45c-l59t6\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:52 crc kubenswrapper[4809]: I0312 08:27:52.431294 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.593854 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.607782 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.650651 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c45bb76f-92d7-4214-9ce3-c64361a40416","Type":"ContainerDied","Data":"aa4028f32dc12123d3abd0e57773433c39bbdce9fd9e72bf2e5b3f8e30d640bf"} Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.650660 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.666330 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0","Type":"ContainerDied","Data":"f65b0cfe8b97b4f4419a315daa266a8831bd67c140d14d466b059baa3537feaa"} Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.666436 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667346 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-tls\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667433 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-config-data\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667524 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-plugins\") 
pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667572 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-confd\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667645 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8n88\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-kube-api-access-z8n88\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667680 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-plugins-conf\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667724 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-erlang-cookie-secret\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667766 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-plugins\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667784 4809 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-confd\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667834 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-pod-info\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667900 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-server-conf\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667941 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-tls\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.667983 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfhxl\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-kube-api-access-jfhxl\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.668689 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: 
\"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.668786 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.669505 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.669565 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-server-conf\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.669593 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-erlang-cookie\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.669707 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-config-data\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.669765 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c45bb76f-92d7-4214-9ce3-c64361a40416-pod-info\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.669799 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-erlang-cookie\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.669816 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c45bb76f-92d7-4214-9ce3-c64361a40416-erlang-cookie-secret\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.669880 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-plugins-conf\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.670508 4809 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.671400 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.673566 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.678610 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.684666 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.689539 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.693677 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-pod-info" (OuterVolumeSpecName: "pod-info") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.698266 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.700410 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-kube-api-access-jfhxl" (OuterVolumeSpecName: "kube-api-access-jfhxl") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "kube-api-access-jfhxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.721935 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.732097 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c45bb76f-92d7-4214-9ce3-c64361a40416-pod-info" (OuterVolumeSpecName: "pod-info") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.741750 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.753652 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-kube-api-access-z8n88" (OuterVolumeSpecName: "kube-api-access-z8n88") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "kube-api-access-z8n88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.760612 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45bb76f-92d7-4214-9ce3-c64361a40416-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.775963 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2" (OuterVolumeSpecName: "persistence") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "pvc-1be8a036-b866-4541-a70d-841ba87cbfe2". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.776099 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e" (OuterVolumeSpecName: "persistence") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: E0312 08:27:53.780229 4809 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/c45bb76f-92d7-4214-9ce3-c64361a40416/volumes/kubernetes.io~csi/pvc-1be8a036-b866-4541-a70d-841ba87cbfe2/mount]: kubernetes.io/csi: failed to open volume data file 
[/var/lib/kubelet/pods/c45bb76f-92d7-4214-9ce3-c64361a40416/volumes/kubernetes.io~csi/pvc-1be8a036-b866-4541-a70d-841ba87cbfe2/vol_data.json]: open /var/lib/kubelet/pods/c45bb76f-92d7-4214-9ce3-c64361a40416/volumes/kubernetes.io~csi/pvc-1be8a036-b866-4541-a70d-841ba87cbfe2/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"c45bb76f-92d7-4214-9ce3-c64361a40416\" (UID: \"c45bb76f-92d7-4214-9ce3-c64361a40416\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/c45bb76f-92d7-4214-9ce3-c64361a40416/volumes/kubernetes.io~csi/pvc-1be8a036-b866-4541-a70d-841ba87cbfe2/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/c45bb76f-92d7-4214-9ce3-c64361a40416/volumes/kubernetes.io~csi/pvc-1be8a036-b866-4541-a70d-841ba87cbfe2/vol_data.json]: open /var/lib/kubelet/pods/c45bb76f-92d7-4214-9ce3-c64361a40416/volumes/kubernetes.io~csi/pvc-1be8a036-b866-4541-a70d-841ba87cbfe2/vol_data.json: no such file or directory" Mar 12 08:27:53 crc kubenswrapper[4809]: E0312 08:27:53.781158 4809 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") : kubernetes.io/csi: unmounter failed to load volume data file 
[/var/lib/kubelet/pods/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0/volumes/kubernetes.io~csi/pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0/volumes/kubernetes.io~csi/pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e/vol_data.json]: open /var/lib/kubelet/pods/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0/volumes/kubernetes.io~csi/pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\" (UID: \"dae47d35-2955-4c02-88bb-a0fbe4cd7bf0\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0/volumes/kubernetes.io~csi/pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0/volumes/kubernetes.io~csi/pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e/vol_data.json]: open /var/lib/kubelet/pods/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0/volumes/kubernetes.io~csi/pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e/vol_data.json: no such file or directory" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782602 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782630 4809 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c45bb76f-92d7-4214-9ce3-c64361a40416-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782641 4809 reconciler_common.go:293] "Volume 
detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782650 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782660 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782670 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8n88\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-kube-api-access-z8n88\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782682 4809 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782690 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782698 4809 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782706 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782715 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfhxl\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-kube-api-access-jfhxl\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782740 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") on node \"crc\" " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782759 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") on node \"crc\" " Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782771 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.782780 4809 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c45bb76f-92d7-4214-9ce3-c64361a40416-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.812747 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-config-data" (OuterVolumeSpecName: "config-data") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.856556 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-config-data" (OuterVolumeSpecName: "config-data") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.858782 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.859031 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-1be8a036-b866-4541-a70d-841ba87cbfe2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2") on node "crc" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.885642 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.885690 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.885711 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.890318 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping UnmountDevice... Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.890524 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e") on node "crc" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.947409 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-server-conf" (OuterVolumeSpecName: "server-conf") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.965972 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-server-conf" (OuterVolumeSpecName: "server-conf") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.988936 4809 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c45bb76f-92d7-4214-9ce3-c64361a40416-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.988978 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:53 crc kubenswrapper[4809]: I0312 08:27:53.988989 4809 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.012393 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c45bb76f-92d7-4214-9ce3-c64361a40416" (UID: "c45bb76f-92d7-4214-9ce3-c64361a40416"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.014354 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" (UID: "dae47d35-2955-4c02-88bb-a0fbe4cd7bf0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.091413 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c45bb76f-92d7-4214-9ce3-c64361a40416-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.091444 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.329355 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.365332 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.417253 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.452239 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.467523 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 08:27:54 crc kubenswrapper[4809]: E0312 08:27:54.468267 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerName="rabbitmq" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.468287 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerName="rabbitmq" Mar 12 08:27:54 crc kubenswrapper[4809]: E0312 08:27:54.468321 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerName="setup-container" Mar 12 08:27:54 crc kubenswrapper[4809]: 
I0312 08:27:54.468328 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerName="setup-container" Mar 12 08:27:54 crc kubenswrapper[4809]: E0312 08:27:54.468351 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerName="setup-container" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.468358 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerName="setup-container" Mar 12 08:27:54 crc kubenswrapper[4809]: E0312 08:27:54.468374 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerName="rabbitmq" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.468380 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerName="rabbitmq" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.468629 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerName="rabbitmq" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.468659 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerName="rabbitmq" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.470273 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.538085 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.567060 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.571293 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.574220 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.574844 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.574885 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.599706 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.600754 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.600948 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9gcnx" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.601068 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.612074 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634000 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 
08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634106 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634160 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634181 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634240 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b96c701c-45de-46e2-95d0-df4e12f6d643-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634308 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc 
kubenswrapper[4809]: I0312 08:27:54.634375 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b96c701c-45de-46e2-95d0-df4e12f6d643-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634401 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634420 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634463 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.634512 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqcv\" (UniqueName: \"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-kube-api-access-6rqcv\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.737296 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.737672 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-config-data\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.737792 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-server-conf\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.737861 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.737888 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/221e3c09-0978-4460-9f66-642aa1165af4-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 
08:27:54.737978 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.738011 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s25nk\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-kube-api-access-s25nk\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.738040 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.738269 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b96c701c-45de-46e2-95d0-df4e12f6d643-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.738303 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.738384 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/221e3c09-0978-4460-9f66-642aa1165af4-pod-info\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.738507 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.738614 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739187 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739224 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739265 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b96c701c-45de-46e2-95d0-df4e12f6d643-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739295 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739313 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739361 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739387 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739454 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqcv\" (UniqueName: 
\"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-kube-api-access-6rqcv\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739485 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.739956 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.740124 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.743427 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.743735 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.744489 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b96c701c-45de-46e2-95d0-df4e12f6d643-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.745422 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.745462 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bf583053864d0acd6928dddf5a4c14113cd4eb1967e29536659ba8936bc6cdee/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.746338 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b96c701c-45de-46e2-95d0-df4e12f6d643-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.746433 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" 
Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.749278 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b96c701c-45de-46e2-95d0-df4e12f6d643-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.757272 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqcv\" (UniqueName: \"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-kube-api-access-6rqcv\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.762492 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b96c701c-45de-46e2-95d0-df4e12f6d643-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.814905 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef1f67d4-0f74-4c05-ae85-7535fc08390e\") pod \"rabbitmq-cell1-server-0\" (UID: \"b96c701c-45de-46e2-95d0-df4e12f6d643\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843281 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843350 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843415 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843471 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843504 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-config-data\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843536 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-server-conf\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843559 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/221e3c09-0978-4460-9f66-642aa1165af4-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843594 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s25nk\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-kube-api-access-s25nk\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843649 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843682 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/221e3c09-0978-4460-9f66-642aa1165af4-pod-info\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843750 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.843850 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" 
(UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.844297 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.845053 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.845192 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-server-conf\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.845229 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/221e3c09-0978-4460-9f66-642aa1165af4-config-data\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.849996 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/221e3c09-0978-4460-9f66-642aa1165af4-pod-info\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.850045 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.850583 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.851863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/221e3c09-0978-4460-9f66-642aa1165af4-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.853493 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.853561 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/926d41cabb633f27f3d651d566f648c3452c1aea11ce8a8ddceb0de312727de4/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.882780 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s25nk\" (UniqueName: \"kubernetes.io/projected/221e3c09-0978-4460-9f66-642aa1165af4-kube-api-access-s25nk\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.899071 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:27:54 crc kubenswrapper[4809]: I0312 08:27:54.926081 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1be8a036-b866-4541-a70d-841ba87cbfe2\") pod \"rabbitmq-server-2\" (UID: \"221e3c09-0978-4460-9f66-642aa1165af4\") " pod="openstack/rabbitmq-server-2" Mar 12 08:27:55 crc kubenswrapper[4809]: I0312 08:27:55.127542 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" path="/var/lib/kubelet/pods/c45bb76f-92d7-4214-9ce3-c64361a40416/volumes" Mar 12 08:27:55 crc kubenswrapper[4809]: I0312 08:27:55.128485 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" path="/var/lib/kubelet/pods/dae47d35-2955-4c02-88bb-a0fbe4cd7bf0/volumes" Mar 12 08:27:55 crc kubenswrapper[4809]: I0312 08:27:55.173590 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 08:27:57 crc kubenswrapper[4809]: I0312 08:27:57.185341 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="dae47d35-2955-4c02-88bb-a0fbe4cd7bf0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: i/o timeout" Mar 12 08:27:57 crc kubenswrapper[4809]: I0312 08:27:57.696089 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="c45bb76f-92d7-4214-9ce3-c64361a40416" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: i/o timeout" Mar 12 08:27:59 crc kubenswrapper[4809]: I0312 08:27:59.220395 4809 scope.go:117] "RemoveContainer" containerID="fdbf48cdaf94129003bad95d15457222c4e97e3f7eebfbff9e004ffa0165f678" Mar 12 08:27:59 crc kubenswrapper[4809]: I0312 08:27:59.750681 4809 scope.go:117] "RemoveContainer" containerID="38f62d995fbf094e492c1c740a1a132f330c19740dd677b7a627a700df227093" Mar 12 08:27:59 crc kubenswrapper[4809]: E0312 08:27:59.761423 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 12 08:27:59 crc kubenswrapper[4809]: E0312 08:27:59.761477 4809 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 12 08:27:59 crc kubenswrapper[4809]: E0312 08:27:59.761605 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f62rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-j9k6d_openstack(8873bebf-493e-4f3d-85a0-fca09ce4d946): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 
12 08:27:59 crc kubenswrapper[4809]: E0312 08:27:59.762767 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-j9k6d" podUID="8873bebf-493e-4f3d-85a0-fca09ce4d946" Mar 12 08:27:59 crc kubenswrapper[4809]: E0312 08:27:59.816101 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-j9k6d" podUID="8873bebf-493e-4f3d-85a0-fca09ce4d946" Mar 12 08:27:59 crc kubenswrapper[4809]: I0312 08:27:59.879491 4809 scope.go:117] "RemoveContainer" containerID="189ef92e31873bc3527a72453e076cc8cc76e11ec56712bdc61eaa1ca3c4e771" Mar 12 08:27:59 crc kubenswrapper[4809]: I0312 08:27:59.928233 4809 scope.go:117] "RemoveContainer" containerID="aa4dcb211340daf33201d36c37708b689c8f08f8477256ee26540cef8fdd886e" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.013686 4809 scope.go:117] "RemoveContainer" containerID="b965992fcf175c016f5c92c5db6095b16f8850df2597d532cfce28009b0f6aea" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.065148 4809 scope.go:117] "RemoveContainer" containerID="22b536a9bcc350066bd43312ac398e7aea4d0d0be2649687f90ebdc261a880ce" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.107753 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:28:00 crc kubenswrapper[4809]: E0312 08:28:00.108190 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.149794 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555068-7zqvj"] Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.152465 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.158893 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.159161 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.158923 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.163345 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555068-7zqvj"] Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.268831 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.316198 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zq2q\" (UniqueName: \"kubernetes.io/projected/a56d570f-f46b-4b34-9241-124d236d7e21-kube-api-access-9zq2q\") pod \"auto-csr-approver-29555068-7zqvj\" (UID: \"a56d570f-f46b-4b34-9241-124d236d7e21\") " pod="openshift-infra/auto-csr-approver-29555068-7zqvj" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.419446 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9zq2q\" (UniqueName: \"kubernetes.io/projected/a56d570f-f46b-4b34-9241-124d236d7e21-kube-api-access-9zq2q\") pod \"auto-csr-approver-29555068-7zqvj\" (UID: \"a56d570f-f46b-4b34-9241-124d236d7e21\") " pod="openshift-infra/auto-csr-approver-29555068-7zqvj" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.422820 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-l59t6"] Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.436421 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zq2q\" (UniqueName: \"kubernetes.io/projected/a56d570f-f46b-4b34-9241-124d236d7e21-kube-api-access-9zq2q\") pod \"auto-csr-approver-29555068-7zqvj\" (UID: \"a56d570f-f46b-4b34-9241-124d236d7e21\") " pod="openshift-infra/auto-csr-approver-29555068-7zqvj" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.482708 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.625859 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 08:28:00 crc kubenswrapper[4809]: W0312 08:28:00.649313 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod221e3c09_0978_4460_9f66_642aa1165af4.slice/crio-80f75ae79ff181f9f1eff572dc590e44cacb5150035248d101f139ad6acafa22 WatchSource:0}: Error finding container 80f75ae79ff181f9f1eff572dc590e44cacb5150035248d101f139ad6acafa22: Status 404 returned error can't find the container with id 80f75ae79ff181f9f1eff572dc590e44cacb5150035248d101f139ad6acafa22 Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.664633 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.827249 4809 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" event={"ID":"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7","Type":"ContainerStarted","Data":"e7ed012b442c1df33b19c9afe58aa397df47db455a31fce3197622a36569e858"} Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.834059 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerStarted","Data":"cfad6192d311afdd541f594629f203d5f4f272436ad2ab43b72ffc477fcd0109"} Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.835945 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b96c701c-45de-46e2-95d0-df4e12f6d643","Type":"ContainerStarted","Data":"ba5de7f2cfa7ebcebfdca44662bb1f50a0a39b420af3d8e1946e89eb87f91f87"} Mar 12 08:28:00 crc kubenswrapper[4809]: I0312 08:28:00.838685 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"221e3c09-0978-4460-9f66-642aa1165af4","Type":"ContainerStarted","Data":"80f75ae79ff181f9f1eff572dc590e44cacb5150035248d101f139ad6acafa22"} Mar 12 08:28:01 crc kubenswrapper[4809]: I0312 08:28:01.138181 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555068-7zqvj"] Mar 12 08:28:01 crc kubenswrapper[4809]: I0312 08:28:01.858733 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" event={"ID":"a56d570f-f46b-4b34-9241-124d236d7e21","Type":"ContainerStarted","Data":"754b68566d7f8d01e285edf1cdc5bbaa47353f08d75e80dc41fb6266e5fe169a"} Mar 12 08:28:01 crc kubenswrapper[4809]: I0312 08:28:01.862684 4809 generic.go:334] "Generic (PLEG): container finished" podID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" containerID="3df00dc1c45a265da9eb8de85ae550b3fe88069581253b858dd1b77cc3b0fda4" exitCode=0 Mar 12 08:28:01 crc kubenswrapper[4809]: I0312 08:28:01.862757 4809 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" event={"ID":"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7","Type":"ContainerDied","Data":"3df00dc1c45a265da9eb8de85ae550b3fe88069581253b858dd1b77cc3b0fda4"} Mar 12 08:28:03 crc kubenswrapper[4809]: I0312 08:28:03.916615 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"221e3c09-0978-4460-9f66-642aa1165af4","Type":"ContainerStarted","Data":"b9dcdad0a4964135b6ef86e85053cc6fee3292efb1051da4d73013b51e582976"} Mar 12 08:28:03 crc kubenswrapper[4809]: I0312 08:28:03.924255 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" event={"ID":"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7","Type":"ContainerStarted","Data":"0a22e9e2c499942df89501a02c7147e117207ba29dc1a525308304c227ac508f"} Mar 12 08:28:03 crc kubenswrapper[4809]: I0312 08:28:03.924352 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:28:03 crc kubenswrapper[4809]: I0312 08:28:03.926918 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" event={"ID":"a56d570f-f46b-4b34-9241-124d236d7e21","Type":"ContainerStarted","Data":"ebf8743c0e7a1b5a1c243fff15f22d096343151bb387070b8e16be8809b7c964"} Mar 12 08:28:03 crc kubenswrapper[4809]: I0312 08:28:03.935319 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b96c701c-45de-46e2-95d0-df4e12f6d643","Type":"ContainerStarted","Data":"ec26d23b86bf034e79c88a5939b8025a083d11ba0d6f47599a107ea0ddeec497"} Mar 12 08:28:03 crc kubenswrapper[4809]: I0312 08:28:03.966880 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" podStartSLOduration=2.494686979 podStartE2EDuration="3.966858963s" podCreationTimestamp="2026-03-12 08:28:00 +0000 UTC" firstStartedPulling="2026-03-12 
08:28:01.146829033 +0000 UTC m=+1754.728864766" lastFinishedPulling="2026-03-12 08:28:02.619001017 +0000 UTC m=+1756.201036750" observedRunningTime="2026-03-12 08:28:03.96455594 +0000 UTC m=+1757.546591673" watchObservedRunningTime="2026-03-12 08:28:03.966858963 +0000 UTC m=+1757.548894686" Mar 12 08:28:04 crc kubenswrapper[4809]: I0312 08:28:04.017725 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" podStartSLOduration=12.017698524 podStartE2EDuration="12.017698524s" podCreationTimestamp="2026-03-12 08:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:28:03.991790171 +0000 UTC m=+1757.573825904" watchObservedRunningTime="2026-03-12 08:28:04.017698524 +0000 UTC m=+1757.599734257" Mar 12 08:28:04 crc kubenswrapper[4809]: I0312 08:28:04.952895 4809 generic.go:334] "Generic (PLEG): container finished" podID="a56d570f-f46b-4b34-9241-124d236d7e21" containerID="ebf8743c0e7a1b5a1c243fff15f22d096343151bb387070b8e16be8809b7c964" exitCode=0 Mar 12 08:28:04 crc kubenswrapper[4809]: I0312 08:28:04.952989 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" event={"ID":"a56d570f-f46b-4b34-9241-124d236d7e21","Type":"ContainerDied","Data":"ebf8743c0e7a1b5a1c243fff15f22d096343151bb387070b8e16be8809b7c964"} Mar 12 08:28:06 crc kubenswrapper[4809]: I0312 08:28:06.531472 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" Mar 12 08:28:06 crc kubenswrapper[4809]: I0312 08:28:06.588554 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zq2q\" (UniqueName: \"kubernetes.io/projected/a56d570f-f46b-4b34-9241-124d236d7e21-kube-api-access-9zq2q\") pod \"a56d570f-f46b-4b34-9241-124d236d7e21\" (UID: \"a56d570f-f46b-4b34-9241-124d236d7e21\") " Mar 12 08:28:06 crc kubenswrapper[4809]: I0312 08:28:06.598525 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56d570f-f46b-4b34-9241-124d236d7e21-kube-api-access-9zq2q" (OuterVolumeSpecName: "kube-api-access-9zq2q") pod "a56d570f-f46b-4b34-9241-124d236d7e21" (UID: "a56d570f-f46b-4b34-9241-124d236d7e21"). InnerVolumeSpecName "kube-api-access-9zq2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:28:06 crc kubenswrapper[4809]: I0312 08:28:06.697803 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zq2q\" (UniqueName: \"kubernetes.io/projected/a56d570f-f46b-4b34-9241-124d236d7e21-kube-api-access-9zq2q\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:06 crc kubenswrapper[4809]: I0312 08:28:06.994230 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerStarted","Data":"b96fb9463b48b47c51299bf419244fbc30fcc5ab57bebf0d2a86617d3123c3f7"} Mar 12 08:28:06 crc kubenswrapper[4809]: I0312 08:28:06.996462 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" event={"ID":"a56d570f-f46b-4b34-9241-124d236d7e21","Type":"ContainerDied","Data":"754b68566d7f8d01e285edf1cdc5bbaa47353f08d75e80dc41fb6266e5fe169a"} Mar 12 08:28:06 crc kubenswrapper[4809]: I0312 08:28:06.996733 4809 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="754b68566d7f8d01e285edf1cdc5bbaa47353f08d75e80dc41fb6266e5fe169a" Mar 12 08:28:06 crc kubenswrapper[4809]: I0312 08:28:06.996733 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555068-7zqvj" Mar 12 08:28:07 crc kubenswrapper[4809]: I0312 08:28:07.059646 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555062-g8cq6"] Mar 12 08:28:07 crc kubenswrapper[4809]: I0312 08:28:07.074683 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555062-g8cq6"] Mar 12 08:28:07 crc kubenswrapper[4809]: I0312 08:28:07.172387 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9671632-9016-48e7-827c-6c440d55245e" path="/var/lib/kubelet/pods/f9671632-9016-48e7-827c-6c440d55245e/volumes" Mar 12 08:28:08 crc kubenswrapper[4809]: I0312 08:28:08.020611 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerStarted","Data":"9b7c5a236cb3855397521f1895f9fc69f78a5f4b6c64c6475c12dc1103055c0e"} Mar 12 08:28:09 crc kubenswrapper[4809]: I0312 08:28:09.035704 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerStarted","Data":"6a002b770108522cac326e4c966f8f817e6201f7c15dd2d637628f5568e32439"} Mar 12 08:28:11 crc kubenswrapper[4809]: I0312 08:28:11.078614 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerStarted","Data":"1dbc9116030d9b415404602a0515cf16917f16419579c8ba4bc5567e1a1520f8"} Mar 12 08:28:11 crc kubenswrapper[4809]: I0312 08:28:11.081805 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 08:28:11 crc kubenswrapper[4809]: I0312 08:28:11.128007 
4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=17.08080276 podStartE2EDuration="27.127979578s" podCreationTimestamp="2026-03-12 08:27:44 +0000 UTC" firstStartedPulling="2026-03-12 08:28:00.282449285 +0000 UTC m=+1753.864485018" lastFinishedPulling="2026-03-12 08:28:10.329626103 +0000 UTC m=+1763.911661836" observedRunningTime="2026-03-12 08:28:11.109698331 +0000 UTC m=+1764.691734084" watchObservedRunningTime="2026-03-12 08:28:11.127979578 +0000 UTC m=+1764.710015311" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.107287 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:28:12 crc kubenswrapper[4809]: E0312 08:28:12.110744 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.434437 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.544167 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"] Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.544688 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" podUID="678f1c67-671e-47c2-9086-165664e890c8" containerName="dnsmasq-dns" containerID="cri-o://3a438231b1583d34a00c11e0ebfe0b6d225dd328ae7ec0d849fe8cc27f1051d3" gracePeriod=10 Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.645164 4809 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" podUID="678f1c67-671e-47c2-9086-165664e890c8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.13:5353: connect: connection refused" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.811095 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-d9spn"] Mar 12 08:28:12 crc kubenswrapper[4809]: E0312 08:28:12.811814 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56d570f-f46b-4b34-9241-124d236d7e21" containerName="oc" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.811834 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56d570f-f46b-4b34-9241-124d236d7e21" containerName="oc" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.812164 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56d570f-f46b-4b34-9241-124d236d7e21" containerName="oc" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.813724 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.871635 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-d9spn"] Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.895058 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-config\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.895128 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.895183 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.895208 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.895256 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.895405 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6qm\" (UniqueName: \"kubernetes.io/projected/12c88305-18da-470c-8cda-9a3844ca3e56-kube-api-access-8w6qm\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.895432 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.998081 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6qm\" (UniqueName: \"kubernetes.io/projected/12c88305-18da-470c-8cda-9a3844ca3e56-kube-api-access-8w6qm\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.998619 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.998686 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-config\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.998759 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.998971 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.999011 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.999076 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.999656 4809 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.999891 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:12 crc kubenswrapper[4809]: I0312 08:28:12.999930 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-config\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.000615 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.002393 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.002667 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/12c88305-18da-470c-8cda-9a3844ca3e56-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.028559 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6qm\" (UniqueName: \"kubernetes.io/projected/12c88305-18da-470c-8cda-9a3844ca3e56-kube-api-access-8w6qm\") pod \"dnsmasq-dns-6f6df4f56c-d9spn\" (UID: \"12c88305-18da-470c-8cda-9a3844ca3e56\") " pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.109180 4809 generic.go:334] "Generic (PLEG): container finished" podID="678f1c67-671e-47c2-9086-165664e890c8" containerID="3a438231b1583d34a00c11e0ebfe0b6d225dd328ae7ec0d849fe8cc27f1051d3" exitCode=0 Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.131626 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" event={"ID":"678f1c67-671e-47c2-9086-165664e890c8","Type":"ContainerDied","Data":"3a438231b1583d34a00c11e0ebfe0b6d225dd328ae7ec0d849fe8cc27f1051d3"} Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.131685 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" event={"ID":"678f1c67-671e-47c2-9086-165664e890c8","Type":"ContainerDied","Data":"043581c2da4e0b6dabdf09bdb2348efb4066987530cfd7de6317ddf8d01985e4"} Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.131699 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043581c2da4e0b6dabdf09bdb2348efb4066987530cfd7de6317ddf8d01985e4" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.159674 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.228490 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.433481 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-config\") pod \"678f1c67-671e-47c2-9086-165664e890c8\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.433891 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-swift-storage-0\") pod \"678f1c67-671e-47c2-9086-165664e890c8\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.434011 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdrl2\" (UniqueName: \"kubernetes.io/projected/678f1c67-671e-47c2-9086-165664e890c8-kube-api-access-hdrl2\") pod \"678f1c67-671e-47c2-9086-165664e890c8\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.434035 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-svc\") pod \"678f1c67-671e-47c2-9086-165664e890c8\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.434099 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-nb\") pod \"678f1c67-671e-47c2-9086-165664e890c8\" (UID: 
\"678f1c67-671e-47c2-9086-165664e890c8\") " Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.434179 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-sb\") pod \"678f1c67-671e-47c2-9086-165664e890c8\" (UID: \"678f1c67-671e-47c2-9086-165664e890c8\") " Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.459499 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678f1c67-671e-47c2-9086-165664e890c8-kube-api-access-hdrl2" (OuterVolumeSpecName: "kube-api-access-hdrl2") pod "678f1c67-671e-47c2-9086-165664e890c8" (UID: "678f1c67-671e-47c2-9086-165664e890c8"). InnerVolumeSpecName "kube-api-access-hdrl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.524627 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "678f1c67-671e-47c2-9086-165664e890c8" (UID: "678f1c67-671e-47c2-9086-165664e890c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.532825 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "678f1c67-671e-47c2-9086-165664e890c8" (UID: "678f1c67-671e-47c2-9086-165664e890c8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.537621 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.537661 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdrl2\" (UniqueName: \"kubernetes.io/projected/678f1c67-671e-47c2-9086-165664e890c8-kube-api-access-hdrl2\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.537672 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.542744 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-config" (OuterVolumeSpecName: "config") pod "678f1c67-671e-47c2-9086-165664e890c8" (UID: "678f1c67-671e-47c2-9086-165664e890c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.561557 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "678f1c67-671e-47c2-9086-165664e890c8" (UID: "678f1c67-671e-47c2-9086-165664e890c8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.572194 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "678f1c67-671e-47c2-9086-165664e890c8" (UID: "678f1c67-671e-47c2-9086-165664e890c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.640520 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.640558 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.640568 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/678f1c67-671e-47c2-9086-165664e890c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:13 crc kubenswrapper[4809]: I0312 08:28:13.766081 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-d9spn"] Mar 12 08:28:13 crc kubenswrapper[4809]: W0312 08:28:13.768257 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12c88305_18da_470c_8cda_9a3844ca3e56.slice/crio-17024dc3b3c2e8eb5774c52a83b6e9d957cdcdc6331fea45d33ec9bc2297ec12 WatchSource:0}: Error finding container 17024dc3b3c2e8eb5774c52a83b6e9d957cdcdc6331fea45d33ec9bc2297ec12: Status 404 returned error can't find the container with id 17024dc3b3c2e8eb5774c52a83b6e9d957cdcdc6331fea45d33ec9bc2297ec12 Mar 12 08:28:14 crc 
kubenswrapper[4809]: I0312 08:28:14.137953 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" event={"ID":"12c88305-18da-470c-8cda-9a3844ca3e56","Type":"ContainerStarted","Data":"17024dc3b3c2e8eb5774c52a83b6e9d957cdcdc6331fea45d33ec9bc2297ec12"} Mar 12 08:28:14 crc kubenswrapper[4809]: I0312 08:28:14.138027 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-48fpg" Mar 12 08:28:14 crc kubenswrapper[4809]: I0312 08:28:14.257527 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"] Mar 12 08:28:14 crc kubenswrapper[4809]: I0312 08:28:14.283639 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-48fpg"] Mar 12 08:28:15 crc kubenswrapper[4809]: I0312 08:28:15.123201 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678f1c67-671e-47c2-9086-165664e890c8" path="/var/lib/kubelet/pods/678f1c67-671e-47c2-9086-165664e890c8/volumes" Mar 12 08:28:15 crc kubenswrapper[4809]: I0312 08:28:15.155988 4809 generic.go:334] "Generic (PLEG): container finished" podID="12c88305-18da-470c-8cda-9a3844ca3e56" containerID="346ea380512b4f6a33f60924974302b1754c9a0705f6ef3832d2e713f4a6eb06" exitCode=0 Mar 12 08:28:15 crc kubenswrapper[4809]: I0312 08:28:15.156060 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" event={"ID":"12c88305-18da-470c-8cda-9a3844ca3e56","Type":"ContainerDied","Data":"346ea380512b4f6a33f60924974302b1754c9a0705f6ef3832d2e713f4a6eb06"} Mar 12 08:28:16 crc kubenswrapper[4809]: I0312 08:28:16.171516 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-j9k6d" event={"ID":"8873bebf-493e-4f3d-85a0-fca09ce4d946","Type":"ContainerStarted","Data":"03f7026909aa5992acc439e0cd8d5b3acff3a019fff5719436348a0284caace8"} Mar 12 08:28:16 crc kubenswrapper[4809]: I0312 08:28:16.174626 
4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" event={"ID":"12c88305-18da-470c-8cda-9a3844ca3e56","Type":"ContainerStarted","Data":"f120ef7cf43643c26f2c294fac096363e66fe245fb8a6347cdce1af0d510113c"} Mar 12 08:28:16 crc kubenswrapper[4809]: I0312 08:28:16.175565 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:16 crc kubenswrapper[4809]: I0312 08:28:16.200694 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-j9k6d" podStartSLOduration=2.060952338 podStartE2EDuration="42.200669032s" podCreationTimestamp="2026-03-12 08:27:34 +0000 UTC" firstStartedPulling="2026-03-12 08:27:35.189326092 +0000 UTC m=+1728.771361815" lastFinishedPulling="2026-03-12 08:28:15.329042776 +0000 UTC m=+1768.911078509" observedRunningTime="2026-03-12 08:28:16.189718244 +0000 UTC m=+1769.771753987" watchObservedRunningTime="2026-03-12 08:28:16.200669032 +0000 UTC m=+1769.782704765" Mar 12 08:28:16 crc kubenswrapper[4809]: I0312 08:28:16.219656 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" podStartSLOduration=4.219637056 podStartE2EDuration="4.219637056s" podCreationTimestamp="2026-03-12 08:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:28:16.215045702 +0000 UTC m=+1769.797081455" watchObservedRunningTime="2026-03-12 08:28:16.219637056 +0000 UTC m=+1769.801672789" Mar 12 08:28:18 crc kubenswrapper[4809]: I0312 08:28:18.203332 4809 generic.go:334] "Generic (PLEG): container finished" podID="8873bebf-493e-4f3d-85a0-fca09ce4d946" containerID="03f7026909aa5992acc439e0cd8d5b3acff3a019fff5719436348a0284caace8" exitCode=0 Mar 12 08:28:18 crc kubenswrapper[4809]: I0312 08:28:18.203423 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-db-sync-j9k6d" event={"ID":"8873bebf-493e-4f3d-85a0-fca09ce4d946","Type":"ContainerDied","Data":"03f7026909aa5992acc439e0cd8d5b3acff3a019fff5719436348a0284caace8"} Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.726814 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-j9k6d" Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.844174 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-config-data\") pod \"8873bebf-493e-4f3d-85a0-fca09ce4d946\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.844255 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f62rs\" (UniqueName: \"kubernetes.io/projected/8873bebf-493e-4f3d-85a0-fca09ce4d946-kube-api-access-f62rs\") pod \"8873bebf-493e-4f3d-85a0-fca09ce4d946\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.844546 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-combined-ca-bundle\") pod \"8873bebf-493e-4f3d-85a0-fca09ce4d946\" (UID: \"8873bebf-493e-4f3d-85a0-fca09ce4d946\") " Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.852797 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8873bebf-493e-4f3d-85a0-fca09ce4d946-kube-api-access-f62rs" (OuterVolumeSpecName: "kube-api-access-f62rs") pod "8873bebf-493e-4f3d-85a0-fca09ce4d946" (UID: "8873bebf-493e-4f3d-85a0-fca09ce4d946"). InnerVolumeSpecName "kube-api-access-f62rs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.893913 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8873bebf-493e-4f3d-85a0-fca09ce4d946" (UID: "8873bebf-493e-4f3d-85a0-fca09ce4d946"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.949186 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.949234 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f62rs\" (UniqueName: \"kubernetes.io/projected/8873bebf-493e-4f3d-85a0-fca09ce4d946-kube-api-access-f62rs\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:19 crc kubenswrapper[4809]: I0312 08:28:19.968009 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-config-data" (OuterVolumeSpecName: "config-data") pod "8873bebf-493e-4f3d-85a0-fca09ce4d946" (UID: "8873bebf-493e-4f3d-85a0-fca09ce4d946"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:20 crc kubenswrapper[4809]: I0312 08:28:20.052771 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8873bebf-493e-4f3d-85a0-fca09ce4d946-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:20 crc kubenswrapper[4809]: I0312 08:28:20.246587 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-j9k6d" event={"ID":"8873bebf-493e-4f3d-85a0-fca09ce4d946","Type":"ContainerDied","Data":"1a5517131f01917df9bf32604dd01030d643f427eb9114005b8d98d87391f2ba"} Mar 12 08:28:20 crc kubenswrapper[4809]: I0312 08:28:20.246634 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-j9k6d" Mar 12 08:28:20 crc kubenswrapper[4809]: I0312 08:28:20.246646 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a5517131f01917df9bf32604dd01030d643f427eb9114005b8d98d87391f2ba" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.405224 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7ccd669dc7-8d45d"] Mar 12 08:28:21 crc kubenswrapper[4809]: E0312 08:28:21.406206 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678f1c67-671e-47c2-9086-165664e890c8" containerName="init" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.406227 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="678f1c67-671e-47c2-9086-165664e890c8" containerName="init" Mar 12 08:28:21 crc kubenswrapper[4809]: E0312 08:28:21.406270 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678f1c67-671e-47c2-9086-165664e890c8" containerName="dnsmasq-dns" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.406284 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="678f1c67-671e-47c2-9086-165664e890c8" containerName="dnsmasq-dns" Mar 12 08:28:21 crc kubenswrapper[4809]: E0312 08:28:21.406325 
4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8873bebf-493e-4f3d-85a0-fca09ce4d946" containerName="heat-db-sync" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.406338 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8873bebf-493e-4f3d-85a0-fca09ce4d946" containerName="heat-db-sync" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.406676 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="678f1c67-671e-47c2-9086-165664e890c8" containerName="dnsmasq-dns" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.406694 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8873bebf-493e-4f3d-85a0-fca09ce4d946" containerName="heat-db-sync" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.407851 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.437237 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7ccd669dc7-8d45d"] Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.494462 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbwgh\" (UniqueName: \"kubernetes.io/projected/420beb8d-02d6-4a02-9b98-d30c28771f03-kube-api-access-wbwgh\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.494945 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-config-data\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.495229 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-config-data-custom\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.495432 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-combined-ca-bundle\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.515722 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8c9fdf79f-l9g9l"] Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.518165 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.542520 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6dd54d47c7-gfgfm"] Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.544858 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.582664 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8c9fdf79f-l9g9l"] Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.599253 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-config-data-custom\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.599432 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-combined-ca-bundle\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.599532 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-config-data-custom\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.599595 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-combined-ca-bundle\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.600076 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-internal-tls-certs\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.600251 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbwgh\" (UniqueName: \"kubernetes.io/projected/420beb8d-02d6-4a02-9b98-d30c28771f03-kube-api-access-wbwgh\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.600321 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-config-data\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.600525 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-config-data\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.600835 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d4zq\" (UniqueName: \"kubernetes.io/projected/8934ae77-4826-4fa1-a5e1-578b06fa6650-kube-api-access-5d4zq\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.601003 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-public-tls-certs\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.606684 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6dd54d47c7-gfgfm"] Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.609415 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-config-data-custom\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.611306 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-combined-ca-bundle\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.618176 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420beb8d-02d6-4a02-9b98-d30c28771f03-config-data\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.649332 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbwgh\" (UniqueName: \"kubernetes.io/projected/420beb8d-02d6-4a02-9b98-d30c28771f03-kube-api-access-wbwgh\") pod \"heat-engine-7ccd669dc7-8d45d\" (UID: \"420beb8d-02d6-4a02-9b98-d30c28771f03\") " pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 
08:28:21.703282 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d4zq\" (UniqueName: \"kubernetes.io/projected/8934ae77-4826-4fa1-a5e1-578b06fa6650-kube-api-access-5d4zq\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703344 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-public-tls-certs\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703374 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wxwk\" (UniqueName: \"kubernetes.io/projected/d99e4f98-531f-4ef3-a833-82591d23bea7-kube-api-access-6wxwk\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703416 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-combined-ca-bundle\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703481 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-combined-ca-bundle\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 
08:28:21.703528 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-config-data-custom\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703565 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-internal-tls-certs\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703598 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-config-data-custom\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703621 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-config-data\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703649 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-internal-tls-certs\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 
08:28:21.703690 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-config-data\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.703722 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-public-tls-certs\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.708961 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-config-data-custom\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.711489 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-internal-tls-certs\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.712545 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-config-data\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.714815 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-public-tls-certs\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.717176 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8934ae77-4826-4fa1-a5e1-578b06fa6650-combined-ca-bundle\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.724491 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d4zq\" (UniqueName: \"kubernetes.io/projected/8934ae77-4826-4fa1-a5e1-578b06fa6650-kube-api-access-5d4zq\") pod \"heat-api-8c9fdf79f-l9g9l\" (UID: \"8934ae77-4826-4fa1-a5e1-578b06fa6650\") " pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.738237 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.810861 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-internal-tls-certs\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.810968 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-config-data-custom\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.811026 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-config-data\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.811271 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-public-tls-certs\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.811497 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wxwk\" (UniqueName: \"kubernetes.io/projected/d99e4f98-531f-4ef3-a833-82591d23bea7-kube-api-access-6wxwk\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " 
pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.811600 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-combined-ca-bundle\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.817589 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-config-data-custom\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.817820 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-combined-ca-bundle\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.818350 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-public-tls-certs\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.821949 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-config-data\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc 
kubenswrapper[4809]: I0312 08:28:21.823965 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99e4f98-531f-4ef3-a833-82591d23bea7-internal-tls-certs\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.835349 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wxwk\" (UniqueName: \"kubernetes.io/projected/d99e4f98-531f-4ef3-a833-82591d23bea7-kube-api-access-6wxwk\") pod \"heat-cfnapi-6dd54d47c7-gfgfm\" (UID: \"d99e4f98-531f-4ef3-a833-82591d23bea7\") " pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.842747 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:21 crc kubenswrapper[4809]: I0312 08:28:21.886227 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:22 crc kubenswrapper[4809]: I0312 08:28:22.626793 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7ccd669dc7-8d45d"] Mar 12 08:28:22 crc kubenswrapper[4809]: W0312 08:28:22.632339 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420beb8d_02d6_4a02_9b98_d30c28771f03.slice/crio-01ac7105e406db6e47da8fc67c9aa6a92a9d371693827cb48a5447c84f9de987 WatchSource:0}: Error finding container 01ac7105e406db6e47da8fc67c9aa6a92a9d371693827cb48a5447c84f9de987: Status 404 returned error can't find the container with id 01ac7105e406db6e47da8fc67c9aa6a92a9d371693827cb48a5447c84f9de987 Mar 12 08:28:22 crc kubenswrapper[4809]: I0312 08:28:22.755358 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8c9fdf79f-l9g9l"] Mar 12 08:28:22 crc kubenswrapper[4809]: I0312 08:28:22.774640 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:28:22 crc kubenswrapper[4809]: W0312 08:28:22.902935 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd99e4f98_531f_4ef3_a833_82591d23bea7.slice/crio-a63bee1df194fdd60edcf31bb812b38bfecaa2317e4cc5ef8c0248f098666e95 WatchSource:0}: Error finding container a63bee1df194fdd60edcf31bb812b38bfecaa2317e4cc5ef8c0248f098666e95: Status 404 returned error can't find the container with id a63bee1df194fdd60edcf31bb812b38bfecaa2317e4cc5ef8c0248f098666e95 Mar 12 08:28:22 crc kubenswrapper[4809]: I0312 08:28:22.913022 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6dd54d47c7-gfgfm"] Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.108792 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:28:23 
crc kubenswrapper[4809]: E0312 08:28:23.110066 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.161429 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-d9spn" Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.266492 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-l59t6"] Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.273443 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" podUID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" containerName="dnsmasq-dns" containerID="cri-o://0a22e9e2c499942df89501a02c7147e117207ba29dc1a525308304c227ac508f" gracePeriod=10 Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.311507 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8c9fdf79f-l9g9l" event={"ID":"8934ae77-4826-4fa1-a5e1-578b06fa6650","Type":"ContainerStarted","Data":"91fe6cafaca9c6c20c3f8a90e74e85eb78dfc21b61ad505773f82aa99b566178"} Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.313443 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" event={"ID":"d99e4f98-531f-4ef3-a833-82591d23bea7","Type":"ContainerStarted","Data":"a63bee1df194fdd60edcf31bb812b38bfecaa2317e4cc5ef8c0248f098666e95"} Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.325830 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7ccd669dc7-8d45d" 
event={"ID":"420beb8d-02d6-4a02-9b98-d30c28771f03","Type":"ContainerStarted","Data":"20508769972e8fa362ac439ed3661acfeb6165fac3ce276d0d0bfe1c75bf6c96"} Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.325901 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7ccd669dc7-8d45d" event={"ID":"420beb8d-02d6-4a02-9b98-d30c28771f03","Type":"ContainerStarted","Data":"01ac7105e406db6e47da8fc67c9aa6a92a9d371693827cb48a5447c84f9de987"} Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.328287 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:23 crc kubenswrapper[4809]: I0312 08:28:23.368935 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7ccd669dc7-8d45d" podStartSLOduration=2.368913889 podStartE2EDuration="2.368913889s" podCreationTimestamp="2026-03-12 08:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:28:23.347747034 +0000 UTC m=+1776.929782767" watchObservedRunningTime="2026-03-12 08:28:23.368913889 +0000 UTC m=+1776.950949622" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.355910 4809 generic.go:334] "Generic (PLEG): container finished" podID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" containerID="0a22e9e2c499942df89501a02c7147e117207ba29dc1a525308304c227ac508f" exitCode=0 Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.356091 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" event={"ID":"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7","Type":"ContainerDied","Data":"0a22e9e2c499942df89501a02c7147e117207ba29dc1a525308304c227ac508f"} Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.357376 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" 
event={"ID":"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7","Type":"ContainerDied","Data":"e7ed012b442c1df33b19c9afe58aa397df47db455a31fce3197622a36569e858"} Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.357421 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ed012b442c1df33b19c9afe58aa397df47db455a31fce3197622a36569e858" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.378954 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.510503 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-swift-storage-0\") pod \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.510556 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-openstack-edpm-ipam\") pod \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.510647 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-sb\") pod \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.510886 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-nb\") pod \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " Mar 12 
08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.510908 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx68f\" (UniqueName: \"kubernetes.io/projected/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-kube-api-access-tx68f\") pod \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.510963 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-svc\") pod \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.511043 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-config\") pod \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\" (UID: \"cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7\") " Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.519991 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-kube-api-access-tx68f" (OuterVolumeSpecName: "kube-api-access-tx68f") pod "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" (UID: "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7"). InnerVolumeSpecName "kube-api-access-tx68f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.612685 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" (UID: "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.616347 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.616393 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx68f\" (UniqueName: \"kubernetes.io/projected/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-kube-api-access-tx68f\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.622779 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" (UID: "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.633705 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" (UID: "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.634541 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-config" (OuterVolumeSpecName: "config") pod "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" (UID: "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.639865 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" (UID: "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.644942 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" (UID: "cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.720269 4809 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.720674 4809 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-config\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.720686 4809 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.720696 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Mar 12 08:28:24 crc kubenswrapper[4809]: I0312 08:28:24.720708 4809 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:25 crc kubenswrapper[4809]: I0312 08:28:25.372486 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-l59t6" Mar 12 08:28:25 crc kubenswrapper[4809]: I0312 08:28:25.437562 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-l59t6"] Mar 12 08:28:25 crc kubenswrapper[4809]: I0312 08:28:25.453652 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-l59t6"] Mar 12 08:28:25 crc kubenswrapper[4809]: I0312 08:28:25.595328 4809 scope.go:117] "RemoveContainer" containerID="cdde10bb8ba356c33cd0f0e3dd3997a2688213f7550653d5a00b2251fc8255cc" Mar 12 08:28:25 crc kubenswrapper[4809]: I0312 08:28:25.642269 4809 scope.go:117] "RemoveContainer" containerID="20b12842050b232e599ccac60b81b93166fcd085b3529609c66c38802241c416" Mar 12 08:28:25 crc kubenswrapper[4809]: I0312 08:28:25.669102 4809 scope.go:117] "RemoveContainer" containerID="936147b419bd7fc41f9179fd87b532efe109407e02f3eb966f79ebaa35a502a6" Mar 12 08:28:25 crc kubenswrapper[4809]: I0312 08:28:25.777635 4809 scope.go:117] "RemoveContainer" containerID="1e64bfd09d8d77d66d9596ed68feb37d5b5cebf7e3f4d7a789cd88e5c9770d10" Mar 12 08:28:25 crc kubenswrapper[4809]: I0312 08:28:25.855621 4809 scope.go:117] "RemoveContainer" containerID="e627c1b0d7956435ef06290158275a1f130d6733b653de2d5bda80602359cc3d" Mar 12 08:28:26 crc kubenswrapper[4809]: I0312 08:28:26.403987 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8c9fdf79f-l9g9l" 
event={"ID":"8934ae77-4826-4fa1-a5e1-578b06fa6650","Type":"ContainerStarted","Data":"577e5ba2a9276f9e44258e3825b920d1f51892d5f74bc1d435ce5fae0f078ce6"} Mar 12 08:28:26 crc kubenswrapper[4809]: I0312 08:28:26.404641 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:26 crc kubenswrapper[4809]: I0312 08:28:26.410897 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" event={"ID":"d99e4f98-531f-4ef3-a833-82591d23bea7","Type":"ContainerStarted","Data":"3146763e393a976224ed99335ebb119be79d539138412e7bbd5827617f3a55f8"} Mar 12 08:28:26 crc kubenswrapper[4809]: I0312 08:28:26.411126 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:26 crc kubenswrapper[4809]: I0312 08:28:26.426192 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-8c9fdf79f-l9g9l" podStartSLOduration=3.128428408 podStartE2EDuration="5.426172126s" podCreationTimestamp="2026-03-12 08:28:21 +0000 UTC" firstStartedPulling="2026-03-12 08:28:22.774448815 +0000 UTC m=+1776.356484548" lastFinishedPulling="2026-03-12 08:28:25.072192543 +0000 UTC m=+1778.654228266" observedRunningTime="2026-03-12 08:28:26.424578963 +0000 UTC m=+1780.006614696" watchObservedRunningTime="2026-03-12 08:28:26.426172126 +0000 UTC m=+1780.008207859" Mar 12 08:28:26 crc kubenswrapper[4809]: I0312 08:28:26.456884 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" podStartSLOduration=3.309030496 podStartE2EDuration="5.45685954s" podCreationTimestamp="2026-03-12 08:28:21 +0000 UTC" firstStartedPulling="2026-03-12 08:28:22.908259831 +0000 UTC m=+1776.490295554" lastFinishedPulling="2026-03-12 08:28:25.056088865 +0000 UTC m=+1778.638124598" observedRunningTime="2026-03-12 08:28:26.445934263 +0000 UTC m=+1780.027969996" 
watchObservedRunningTime="2026-03-12 08:28:26.45685954 +0000 UTC m=+1780.038895273" Mar 12 08:28:27 crc kubenswrapper[4809]: I0312 08:28:27.133963 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" path="/var/lib/kubelet/pods/cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7/volumes" Mar 12 08:28:33 crc kubenswrapper[4809]: I0312 08:28:33.558366 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-8c9fdf79f-l9g9l" Mar 12 08:28:33 crc kubenswrapper[4809]: I0312 08:28:33.637569 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-84d75fd6d6-jt5pm"] Mar 12 08:28:33 crc kubenswrapper[4809]: I0312 08:28:33.638163 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-84d75fd6d6-jt5pm" podUID="157d4d8b-15cb-413b-b689-209cdf45f1b7" containerName="heat-api" containerID="cri-o://1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243" gracePeriod=60 Mar 12 08:28:34 crc kubenswrapper[4809]: I0312 08:28:34.075078 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6dd54d47c7-gfgfm" Mar 12 08:28:34 crc kubenswrapper[4809]: I0312 08:28:34.106783 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:28:34 crc kubenswrapper[4809]: E0312 08:28:34.107174 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:28:34 crc kubenswrapper[4809]: I0312 08:28:34.150809 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-765ccf488b-zgbk5"] Mar 12 08:28:34 crc kubenswrapper[4809]: I0312 08:28:34.151444 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" podUID="be0d6711-7a21-40cf-ba47-eff3c52046e7" containerName="heat-cfnapi" containerID="cri-o://15d7a3b457ca3bc24189cdfffc58e867d200d6e0bddb3c1fcfccb0ae4d66044d" gracePeriod=60 Mar 12 08:28:35 crc kubenswrapper[4809]: I0312 08:28:35.736738 4809 generic.go:334] "Generic (PLEG): container finished" podID="b96c701c-45de-46e2-95d0-df4e12f6d643" containerID="ec26d23b86bf034e79c88a5939b8025a083d11ba0d6f47599a107ea0ddeec497" exitCode=0 Mar 12 08:28:35 crc kubenswrapper[4809]: I0312 08:28:35.737928 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b96c701c-45de-46e2-95d0-df4e12f6d643","Type":"ContainerDied","Data":"ec26d23b86bf034e79c88a5939b8025a083d11ba0d6f47599a107ea0ddeec497"} Mar 12 08:28:35 crc kubenswrapper[4809]: I0312 08:28:35.747614 4809 generic.go:334] "Generic (PLEG): container finished" podID="221e3c09-0978-4460-9f66-642aa1165af4" containerID="b9dcdad0a4964135b6ef86e85053cc6fee3292efb1051da4d73013b51e582976" exitCode=0 Mar 12 08:28:35 crc kubenswrapper[4809]: I0312 08:28:35.747672 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"221e3c09-0978-4460-9f66-642aa1165af4","Type":"ContainerDied","Data":"b9dcdad0a4964135b6ef86e85053cc6fee3292efb1051da4d73013b51e582976"} Mar 12 08:28:35 crc kubenswrapper[4809]: E0312 08:28:35.758095 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb96c701c_45de_46e2_95d0_df4e12f6d643.slice/crio-conmon-ec26d23b86bf034e79c88a5939b8025a083d11ba0d6f47599a107ea0ddeec497.scope\": RecentStats: unable to find data in memory cache]" Mar 12 08:28:36 crc kubenswrapper[4809]: I0312 
08:28:36.765904 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"221e3c09-0978-4460-9f66-642aa1165af4","Type":"ContainerStarted","Data":"6cdec81d60f10c8f707debef44cee25c2d0e3b5fa5d44d196cdf6821ca453624"} Mar 12 08:28:36 crc kubenswrapper[4809]: I0312 08:28:36.766982 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 12 08:28:36 crc kubenswrapper[4809]: I0312 08:28:36.769574 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b96c701c-45de-46e2-95d0-df4e12f6d643","Type":"ContainerStarted","Data":"da97b4e8ecd48ebc909104ea3ae195959108003db6e99f20ac0b5b66e2fce033"} Mar 12 08:28:36 crc kubenswrapper[4809]: I0312 08:28:36.769898 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:28:36 crc kubenswrapper[4809]: I0312 08:28:36.839692 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=42.839657038 podStartE2EDuration="42.839657038s" podCreationTimestamp="2026-03-12 08:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:28:36.818307558 +0000 UTC m=+1790.400343291" watchObservedRunningTime="2026-03-12 08:28:36.839657038 +0000 UTC m=+1790.421692771" Mar 12 08:28:36 crc kubenswrapper[4809]: I0312 08:28:36.855867 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.855836519 podStartE2EDuration="42.855836519s" podCreationTimestamp="2026-03-12 08:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:28:36.849081795 +0000 UTC m=+1790.431117528" watchObservedRunningTime="2026-03-12 
08:28:36.855836519 +0000 UTC m=+1790.437872252" Mar 12 08:28:36 crc kubenswrapper[4809]: I0312 08:28:36.920964 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-84d75fd6d6-jt5pm" podUID="157d4d8b-15cb-413b-b689-209cdf45f1b7" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.233:8004/healthcheck\": read tcp 10.217.0.2:56430->10.217.0.233:8004: read: connection reset by peer" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.400525 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" podUID="be0d6711-7a21-40cf-ba47-eff3c52046e7" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.234:8000/healthcheck\": read tcp 10.217.0.2:49582->10.217.0.234:8000: read: connection reset by peer" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.629632 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.695216 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data-custom\") pod \"157d4d8b-15cb-413b-b689-209cdf45f1b7\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.695299 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-combined-ca-bundle\") pod \"157d4d8b-15cb-413b-b689-209cdf45f1b7\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.695377 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-public-tls-certs\") pod 
\"157d4d8b-15cb-413b-b689-209cdf45f1b7\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.695468 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk9kx\" (UniqueName: \"kubernetes.io/projected/157d4d8b-15cb-413b-b689-209cdf45f1b7-kube-api-access-tk9kx\") pod \"157d4d8b-15cb-413b-b689-209cdf45f1b7\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.695495 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-internal-tls-certs\") pod \"157d4d8b-15cb-413b-b689-209cdf45f1b7\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.695630 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data\") pod \"157d4d8b-15cb-413b-b689-209cdf45f1b7\" (UID: \"157d4d8b-15cb-413b-b689-209cdf45f1b7\") " Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.712738 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157d4d8b-15cb-413b-b689-209cdf45f1b7-kube-api-access-tk9kx" (OuterVolumeSpecName: "kube-api-access-tk9kx") pod "157d4d8b-15cb-413b-b689-209cdf45f1b7" (UID: "157d4d8b-15cb-413b-b689-209cdf45f1b7"). InnerVolumeSpecName "kube-api-access-tk9kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.713195 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "157d4d8b-15cb-413b-b689-209cdf45f1b7" (UID: "157d4d8b-15cb-413b-b689-209cdf45f1b7"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.771636 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "157d4d8b-15cb-413b-b689-209cdf45f1b7" (UID: "157d4d8b-15cb-413b-b689-209cdf45f1b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.800679 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.800732 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.800747 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk9kx\" (UniqueName: \"kubernetes.io/projected/157d4d8b-15cb-413b-b689-209cdf45f1b7-kube-api-access-tk9kx\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.874576 4809 generic.go:334] "Generic (PLEG): container finished" podID="157d4d8b-15cb-413b-b689-209cdf45f1b7" containerID="1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243" exitCode=0 Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.874756 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-84d75fd6d6-jt5pm" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.875372 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84d75fd6d6-jt5pm" event={"ID":"157d4d8b-15cb-413b-b689-209cdf45f1b7","Type":"ContainerDied","Data":"1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243"} Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.875468 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84d75fd6d6-jt5pm" event={"ID":"157d4d8b-15cb-413b-b689-209cdf45f1b7","Type":"ContainerDied","Data":"f2bf18c73590fe900d6459ca011b5f41e1ddae8e037bf6d6b185acb17aef74ce"} Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.875489 4809 scope.go:117] "RemoveContainer" containerID="1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.885351 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "157d4d8b-15cb-413b-b689-209cdf45f1b7" (UID: "157d4d8b-15cb-413b-b689-209cdf45f1b7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.916073 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.921320 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "157d4d8b-15cb-413b-b689-209cdf45f1b7" (UID: "157d4d8b-15cb-413b-b689-209cdf45f1b7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.934627 4809 generic.go:334] "Generic (PLEG): container finished" podID="be0d6711-7a21-40cf-ba47-eff3c52046e7" containerID="15d7a3b457ca3bc24189cdfffc58e867d200d6e0bddb3c1fcfccb0ae4d66044d" exitCode=0 Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.934898 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" event={"ID":"be0d6711-7a21-40cf-ba47-eff3c52046e7","Type":"ContainerDied","Data":"15d7a3b457ca3bc24189cdfffc58e867d200d6e0bddb3c1fcfccb0ae4d66044d"} Mar 12 08:28:37 crc kubenswrapper[4809]: I0312 08:28:37.943556 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data" (OuterVolumeSpecName: "config-data") pod "157d4d8b-15cb-413b-b689-209cdf45f1b7" (UID: "157d4d8b-15cb-413b-b689-209cdf45f1b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.019737 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.019769 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157d4d8b-15cb-413b-b689-209cdf45f1b7-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.080606 4809 scope.go:117] "RemoveContainer" containerID="1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243" Mar 12 08:28:38 crc kubenswrapper[4809]: E0312 08:28:38.086782 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243\": container with ID starting with 1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243 not found: ID does not exist" containerID="1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.086836 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243"} err="failed to get container status \"1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243\": rpc error: code = NotFound desc = could not find container \"1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243\": container with ID starting with 1e428a496d3dc1fe31e4dddfc80db499f8d2f70a04af936956005838b29bb243 not found: ID does not exist" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.264242 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-84d75fd6d6-jt5pm"] Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.283495 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-84d75fd6d6-jt5pm"] Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.378937 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.443174 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data\") pod \"be0d6711-7a21-40cf-ba47-eff3c52046e7\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.443309 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-public-tls-certs\") pod \"be0d6711-7a21-40cf-ba47-eff3c52046e7\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.443388 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data-custom\") pod \"be0d6711-7a21-40cf-ba47-eff3c52046e7\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.443494 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-combined-ca-bundle\") pod \"be0d6711-7a21-40cf-ba47-eff3c52046e7\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.443720 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghlw5\" (UniqueName: \"kubernetes.io/projected/be0d6711-7a21-40cf-ba47-eff3c52046e7-kube-api-access-ghlw5\") pod \"be0d6711-7a21-40cf-ba47-eff3c52046e7\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.444267 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-internal-tls-certs\") pod \"be0d6711-7a21-40cf-ba47-eff3c52046e7\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.471465 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0d6711-7a21-40cf-ba47-eff3c52046e7-kube-api-access-ghlw5" (OuterVolumeSpecName: "kube-api-access-ghlw5") pod "be0d6711-7a21-40cf-ba47-eff3c52046e7" (UID: "be0d6711-7a21-40cf-ba47-eff3c52046e7"). InnerVolumeSpecName "kube-api-access-ghlw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.491337 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "be0d6711-7a21-40cf-ba47-eff3c52046e7" (UID: "be0d6711-7a21-40cf-ba47-eff3c52046e7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.548663 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be0d6711-7a21-40cf-ba47-eff3c52046e7" (UID: "be0d6711-7a21-40cf-ba47-eff3c52046e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.551445 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-combined-ca-bundle\") pod \"be0d6711-7a21-40cf-ba47-eff3c52046e7\" (UID: \"be0d6711-7a21-40cf-ba47-eff3c52046e7\") " Mar 12 08:28:38 crc kubenswrapper[4809]: W0312 08:28:38.551601 4809 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/be0d6711-7a21-40cf-ba47-eff3c52046e7/volumes/kubernetes.io~secret/combined-ca-bundle Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.551878 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be0d6711-7a21-40cf-ba47-eff3c52046e7" (UID: "be0d6711-7a21-40cf-ba47-eff3c52046e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.563934 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghlw5\" (UniqueName: \"kubernetes.io/projected/be0d6711-7a21-40cf-ba47-eff3c52046e7-kube-api-access-ghlw5\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.564020 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.564089 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.594847 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "be0d6711-7a21-40cf-ba47-eff3c52046e7" (UID: "be0d6711-7a21-40cf-ba47-eff3c52046e7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.598202 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data" (OuterVolumeSpecName: "config-data") pod "be0d6711-7a21-40cf-ba47-eff3c52046e7" (UID: "be0d6711-7a21-40cf-ba47-eff3c52046e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.619037 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "be0d6711-7a21-40cf-ba47-eff3c52046e7" (UID: "be0d6711-7a21-40cf-ba47-eff3c52046e7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.666423 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.666467 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.666478 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0d6711-7a21-40cf-ba47-eff3c52046e7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.958773 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" event={"ID":"be0d6711-7a21-40cf-ba47-eff3c52046e7","Type":"ContainerDied","Data":"0b22d5182d0af8c2bb773e2756cbd703a653fca05281ad301535b345b6b0aecd"} Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.958812 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-765ccf488b-zgbk5" Mar 12 08:28:38 crc kubenswrapper[4809]: I0312 08:28:38.958847 4809 scope.go:117] "RemoveContainer" containerID="15d7a3b457ca3bc24189cdfffc58e867d200d6e0bddb3c1fcfccb0ae4d66044d" Mar 12 08:28:39 crc kubenswrapper[4809]: I0312 08:28:39.035677 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-765ccf488b-zgbk5"] Mar 12 08:28:39 crc kubenswrapper[4809]: I0312 08:28:39.061053 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-765ccf488b-zgbk5"] Mar 12 08:28:39 crc kubenswrapper[4809]: I0312 08:28:39.122052 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157d4d8b-15cb-413b-b689-209cdf45f1b7" path="/var/lib/kubelet/pods/157d4d8b-15cb-413b-b689-209cdf45f1b7/volumes" Mar 12 08:28:39 crc kubenswrapper[4809]: I0312 08:28:39.123197 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0d6711-7a21-40cf-ba47-eff3c52046e7" path="/var/lib/kubelet/pods/be0d6711-7a21-40cf-ba47-eff3c52046e7/volumes" Mar 12 08:28:41 crc kubenswrapper[4809]: I0312 08:28:41.783660 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7ccd669dc7-8d45d" Mar 12 08:28:41 crc kubenswrapper[4809]: I0312 08:28:41.859978 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-776db87b84-bnbp7"] Mar 12 08:28:41 crc kubenswrapper[4809]: I0312 08:28:41.860357 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-776db87b84-bnbp7" podUID="2c1e547b-63a1-4600-83d6-7efc902df373" containerName="heat-engine" containerID="cri-o://c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" gracePeriod=60 Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.865367 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4"] Mar 12 08:28:42 crc 
kubenswrapper[4809]: E0312 08:28:42.866627 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" containerName="dnsmasq-dns" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.866648 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" containerName="dnsmasq-dns" Mar 12 08:28:42 crc kubenswrapper[4809]: E0312 08:28:42.866660 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0d6711-7a21-40cf-ba47-eff3c52046e7" containerName="heat-cfnapi" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.866667 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0d6711-7a21-40cf-ba47-eff3c52046e7" containerName="heat-cfnapi" Mar 12 08:28:42 crc kubenswrapper[4809]: E0312 08:28:42.866697 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" containerName="init" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.866705 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" containerName="init" Mar 12 08:28:42 crc kubenswrapper[4809]: E0312 08:28:42.866726 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157d4d8b-15cb-413b-b689-209cdf45f1b7" containerName="heat-api" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.866731 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="157d4d8b-15cb-413b-b689-209cdf45f1b7" containerName="heat-api" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.868190 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="157d4d8b-15cb-413b-b689-209cdf45f1b7" containerName="heat-api" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.868214 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf9e7b9c-ecc2-4c8e-86fc-7f8bce870de7" containerName="dnsmasq-dns" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.868223 4809 
memory_manager.go:354] "RemoveStaleState removing state" podUID="be0d6711-7a21-40cf-ba47-eff3c52046e7" containerName="heat-cfnapi" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.869161 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.871867 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.872742 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.872989 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.873253 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.884672 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4"] Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.995381 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.995477 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qv7\" (UniqueName: \"kubernetes.io/projected/baef87a5-805e-4a55-855f-df82b7292028-kube-api-access-f4qv7\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.995941 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:42 crc kubenswrapper[4809]: I0312 08:28:42.996067 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.099455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.099560 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.099733 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.099824 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4qv7\" (UniqueName: \"kubernetes.io/projected/baef87a5-805e-4a55-855f-df82b7292028-kube-api-access-f4qv7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.109520 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.114614 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.121362 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.126040 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4qv7\" (UniqueName: \"kubernetes.io/projected/baef87a5-805e-4a55-855f-df82b7292028-kube-api-access-f4qv7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:43 crc kubenswrapper[4809]: I0312 08:28:43.193788 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.127455 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4"] Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.732925 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-rr6w8"] Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.736237 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-rr6w8"] Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.803105 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-2cqrr"] Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.805094 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.808474 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.819059 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-2cqrr"] Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.874207 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hftxg\" (UniqueName: \"kubernetes.io/projected/38ca4679-8c23-449c-a9e8-fc4e224bd1af-kube-api-access-hftxg\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.874302 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-combined-ca-bundle\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.874520 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-scripts\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.874795 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-config-data\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.963956 4809 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.976484 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-scripts\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.976620 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-config-data\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.976783 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hftxg\" (UniqueName: \"kubernetes.io/projected/38ca4679-8c23-449c-a9e8-fc4e224bd1af-kube-api-access-hftxg\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.976862 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-combined-ca-bundle\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.986254 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-scripts\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:44 crc kubenswrapper[4809]: I0312 08:28:44.994914 4809 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-combined-ca-bundle\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:45 crc kubenswrapper[4809]: I0312 08:28:45.010853 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-config-data\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:45 crc kubenswrapper[4809]: I0312 08:28:45.037805 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hftxg\" (UniqueName: \"kubernetes.io/projected/38ca4679-8c23-449c-a9e8-fc4e224bd1af-kube-api-access-hftxg\") pod \"aodh-db-sync-2cqrr\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:45 crc kubenswrapper[4809]: I0312 08:28:45.068641 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" event={"ID":"baef87a5-805e-4a55-855f-df82b7292028","Type":"ContainerStarted","Data":"ddeb9b5850ee80f05102e1eb172ee51378607cd33ec2bf3f296e2eb9a33dbc5e"} Mar 12 08:28:45 crc kubenswrapper[4809]: I0312 08:28:45.137596 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49c64c6-9e86-4436-a9e6-2723aecbacfe" path="/var/lib/kubelet/pods/d49c64c6-9e86-4436-a9e6-2723aecbacfe/volumes" Mar 12 08:28:45 crc kubenswrapper[4809]: I0312 08:28:45.143716 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:28:45 crc kubenswrapper[4809]: I0312 08:28:45.743515 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-2cqrr"] Mar 12 08:28:45 crc kubenswrapper[4809]: W0312 08:28:45.760491 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38ca4679_8c23_449c_a9e8_fc4e224bd1af.slice/crio-e44adcfc64642eecb040312345dfbf935c77e74b384ea0d242bbef899001aae7 WatchSource:0}: Error finding container e44adcfc64642eecb040312345dfbf935c77e74b384ea0d242bbef899001aae7: Status 404 returned error can't find the container with id e44adcfc64642eecb040312345dfbf935c77e74b384ea0d242bbef899001aae7 Mar 12 08:28:46 crc kubenswrapper[4809]: I0312 08:28:46.090359 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-2cqrr" event={"ID":"38ca4679-8c23-449c-a9e8-fc4e224bd1af","Type":"ContainerStarted","Data":"e44adcfc64642eecb040312345dfbf935c77e74b384ea0d242bbef899001aae7"} Mar 12 08:28:47 crc kubenswrapper[4809]: I0312 08:28:47.141346 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:28:47 crc kubenswrapper[4809]: E0312 08:28:47.141737 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:28:50 crc kubenswrapper[4809]: E0312 08:28:50.916167 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:28:50 crc kubenswrapper[4809]: E0312 08:28:50.919365 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:28:50 crc kubenswrapper[4809]: E0312 08:28:50.920735 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:28:50 crc kubenswrapper[4809]: E0312 08:28:50.920806 4809 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-776db87b84-bnbp7" podUID="2c1e547b-63a1-4600-83d6-7efc902df373" containerName="heat-engine" Mar 12 08:28:54 crc kubenswrapper[4809]: I0312 08:28:54.903382 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 08:28:55 crc kubenswrapper[4809]: I0312 08:28:55.187850 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 12 08:28:55 crc kubenswrapper[4809]: I0312 08:28:55.332408 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:28:59 crc kubenswrapper[4809]: I0312 08:28:59.371374 4809 generic.go:334] "Generic (PLEG): container finished" podID="2c1e547b-63a1-4600-83d6-7efc902df373" 
containerID="c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" exitCode=0 Mar 12 08:28:59 crc kubenswrapper[4809]: I0312 08:28:59.371483 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-776db87b84-bnbp7" event={"ID":"2c1e547b-63a1-4600-83d6-7efc902df373","Type":"ContainerDied","Data":"c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c"} Mar 12 08:29:00 crc kubenswrapper[4809]: E0312 08:29:00.917654 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c is running failed: container process not found" containerID="c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:29:00 crc kubenswrapper[4809]: E0312 08:29:00.924497 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c is running failed: container process not found" containerID="c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:29:00 crc kubenswrapper[4809]: E0312 08:29:00.924922 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c is running failed: container process not found" containerID="c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 08:29:00 crc kubenswrapper[4809]: E0312 08:29:00.925025 4809 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-776db87b84-bnbp7" podUID="2c1e547b-63a1-4600-83d6-7efc902df373" containerName="heat-engine" Mar 12 08:29:01 crc kubenswrapper[4809]: I0312 08:29:01.018748 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerName="rabbitmq" containerID="cri-o://3fa858e890d83738234fdbb2c73ad1d7d67bdb9834004517350810694cfe0668" gracePeriod=604795 Mar 12 08:29:01 crc kubenswrapper[4809]: I0312 08:29:01.107057 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:29:01 crc kubenswrapper[4809]: E0312 08:29:01.107400 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:29:02 crc kubenswrapper[4809]: I0312 08:29:02.710131 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.137:5671: connect: connection refused" Mar 12 08:29:02 crc kubenswrapper[4809]: E0312 08:29:02.847322 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Mar 12 08:29:02 crc kubenswrapper[4809]: E0312 08:29:02.847536 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 08:29:02 crc 
kubenswrapper[4809]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Mar 12 08:29:02 crc kubenswrapper[4809]: - hosts: all Mar 12 08:29:02 crc kubenswrapper[4809]: strategy: linear Mar 12 08:29:02 crc kubenswrapper[4809]: tasks: Mar 12 08:29:02 crc kubenswrapper[4809]: - name: Enable podified-repos Mar 12 08:29:02 crc kubenswrapper[4809]: become: true Mar 12 08:29:02 crc kubenswrapper[4809]: ansible.builtin.shell: | Mar 12 08:29:02 crc kubenswrapper[4809]: set -euxo pipefail Mar 12 08:29:02 crc kubenswrapper[4809]: pushd /var/tmp Mar 12 08:29:02 crc kubenswrapper[4809]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Mar 12 08:29:02 crc kubenswrapper[4809]: pushd repo-setup-main Mar 12 08:29:02 crc kubenswrapper[4809]: python3 -m venv ./venv Mar 12 08:29:02 crc kubenswrapper[4809]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Mar 12 08:29:02 crc kubenswrapper[4809]: ./venv/bin/repo-setup current-podified -b antelope Mar 12 08:29:02 crc kubenswrapper[4809]: popd Mar 12 08:29:02 crc kubenswrapper[4809]: rm -rf repo-setup-main Mar 12 08:29:02 crc kubenswrapper[4809]: Mar 12 08:29:02 crc kubenswrapper[4809]: Mar 12 08:29:02 crc kubenswrapper[4809]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Mar 12 08:29:02 crc kubenswrapper[4809]: edpm_override_hosts: openstack-edpm-ipam Mar 12 08:29:02 crc kubenswrapper[4809]: edpm_service_type: repo-setup Mar 12 08:29:02 crc kubenswrapper[4809]: Mar 12 08:29:02 crc kubenswrapper[4809]: Mar 12 08:29:02 crc kubenswrapper[4809]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4qv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4_openstack(baef87a5-805e-4a55-855f-df82b7292028): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Mar 12 08:29:02 crc kubenswrapper[4809]: > logger="UnhandledError" Mar 12 08:29:02 crc kubenswrapper[4809]: E0312 08:29:02.848888 4809 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" podUID="baef87a5-805e-4a55-855f-df82b7292028" Mar 12 08:29:03 crc kubenswrapper[4809]: E0312 08:29:03.449200 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" podUID="baef87a5-805e-4a55-855f-df82b7292028" Mar 12 08:29:03 crc kubenswrapper[4809]: E0312 08:29:03.975372 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Mar 12 08:29:03 crc kubenswrapper[4809]: E0312 08:29:03.975435 4809 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Mar 12 08:29:03 crc kubenswrapper[4809]: E0312 08:29:03.975610 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:aodh-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:AodhPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:AodhPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hftxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42402,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRe
sizePolicy{},RestartPolicy:nil,} start failed in pod aodh-db-sync-2cqrr_openstack(38ca4679-8c23-449c-a9e8-fc4e224bd1af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 08:29:03 crc kubenswrapper[4809]: E0312 08:29:03.977144 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/aodh-db-sync-2cqrr" podUID="38ca4679-8c23-449c-a9e8-fc4e224bd1af" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.513324 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-776db87b84-bnbp7" event={"ID":"2c1e547b-63a1-4600-83d6-7efc902df373","Type":"ContainerDied","Data":"2f50bff09564f728d3e2d5cff3d830dda02a9ba1bef0000f647b62e3b7cfaeac"} Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.513659 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f50bff09564f728d3e2d5cff3d830dda02a9ba1bef0000f647b62e3b7cfaeac" Mar 12 08:29:04 crc kubenswrapper[4809]: E0312 08:29:04.518481 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested\\\"\"" pod="openstack/aodh-db-sync-2cqrr" podUID="38ca4679-8c23-449c-a9e8-fc4e224bd1af" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.623914 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.813545 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data-custom\") pod \"2c1e547b-63a1-4600-83d6-7efc902df373\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.813953 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data\") pod \"2c1e547b-63a1-4600-83d6-7efc902df373\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.814102 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6smr\" (UniqueName: \"kubernetes.io/projected/2c1e547b-63a1-4600-83d6-7efc902df373-kube-api-access-p6smr\") pod \"2c1e547b-63a1-4600-83d6-7efc902df373\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.814162 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-combined-ca-bundle\") pod \"2c1e547b-63a1-4600-83d6-7efc902df373\" (UID: \"2c1e547b-63a1-4600-83d6-7efc902df373\") " Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.822194 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1e547b-63a1-4600-83d6-7efc902df373-kube-api-access-p6smr" (OuterVolumeSpecName: "kube-api-access-p6smr") pod "2c1e547b-63a1-4600-83d6-7efc902df373" (UID: "2c1e547b-63a1-4600-83d6-7efc902df373"). InnerVolumeSpecName "kube-api-access-p6smr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.824362 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c1e547b-63a1-4600-83d6-7efc902df373" (UID: "2c1e547b-63a1-4600-83d6-7efc902df373"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.862136 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c1e547b-63a1-4600-83d6-7efc902df373" (UID: "2c1e547b-63a1-4600-83d6-7efc902df373"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.898075 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data" (OuterVolumeSpecName: "config-data") pod "2c1e547b-63a1-4600-83d6-7efc902df373" (UID: "2c1e547b-63a1-4600-83d6-7efc902df373"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.918054 4809 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.918124 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.918138 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6smr\" (UniqueName: \"kubernetes.io/projected/2c1e547b-63a1-4600-83d6-7efc902df373-kube-api-access-p6smr\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:04 crc kubenswrapper[4809]: I0312 08:29:04.918150 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1e547b-63a1-4600-83d6-7efc902df373-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:05 crc kubenswrapper[4809]: I0312 08:29:05.541215 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-776db87b84-bnbp7" Mar 12 08:29:05 crc kubenswrapper[4809]: I0312 08:29:05.578503 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-776db87b84-bnbp7"] Mar 12 08:29:05 crc kubenswrapper[4809]: I0312 08:29:05.594701 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-776db87b84-bnbp7"] Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.120066 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1e547b-63a1-4600-83d6-7efc902df373" path="/var/lib/kubelet/pods/2c1e547b-63a1-4600-83d6-7efc902df373/volumes" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.578167 4809 generic.go:334] "Generic (PLEG): container finished" podID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerID="3fa858e890d83738234fdbb2c73ad1d7d67bdb9834004517350810694cfe0668" exitCode=0 Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.578305 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"31094b6a-8ac7-4bbf-883e-aabf280fe22e","Type":"ContainerDied","Data":"3fa858e890d83738234fdbb2c73ad1d7d67bdb9834004517350810694cfe0668"} Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.779248 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.908213 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-erlang-cookie\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.908552 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31094b6a-8ac7-4bbf-883e-aabf280fe22e-erlang-cookie-secret\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.908633 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpm68\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-kube-api-access-zpm68\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.908675 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-config-data\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.908869 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.909350 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.909499 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-server-conf\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.909529 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-tls\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.909593 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31094b6a-8ac7-4bbf-883e-aabf280fe22e-pod-info\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.909632 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-confd\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.909709 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-plugins-conf\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.909732 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-plugins\") pod \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\" (UID: \"31094b6a-8ac7-4bbf-883e-aabf280fe22e\") " Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.910634 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.918475 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.918859 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.923559 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-kube-api-access-zpm68" (OuterVolumeSpecName: "kube-api-access-zpm68") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "kube-api-access-zpm68". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.924237 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/31094b6a-8ac7-4bbf-883e-aabf280fe22e-pod-info" (OuterVolumeSpecName: "pod-info") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.924412 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.933362 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31094b6a-8ac7-4bbf-883e-aabf280fe22e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.955674 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-config-data" (OuterVolumeSpecName: "config-data") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.960674 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6" (OuterVolumeSpecName: "persistence") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "pvc-dc1ea871-b536-4145-bc22-b04289bfeff6". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:29:07 crc kubenswrapper[4809]: I0312 08:29:07.998535 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-server-conf" (OuterVolumeSpecName: "server-conf") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.012880 4809 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.012917 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.012936 4809 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31094b6a-8ac7-4bbf-883e-aabf280fe22e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.012949 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpm68\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-kube-api-access-zpm68\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.012960 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.013001 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") on node \"crc\" " Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.013018 4809 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31094b6a-8ac7-4bbf-883e-aabf280fe22e-server-conf\") on node \"crc\" DevicePath \"\"" Mar 
12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.013032 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.013044 4809 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31094b6a-8ac7-4bbf-883e-aabf280fe22e-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.067928 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.068292 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc1ea871-b536-4145-bc22-b04289bfeff6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6") on node "crc" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.067999 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "31094b6a-8ac7-4bbf-883e-aabf280fe22e" (UID: "31094b6a-8ac7-4bbf-883e-aabf280fe22e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.116877 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.117224 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31094b6a-8ac7-4bbf-883e-aabf280fe22e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.596737 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"31094b6a-8ac7-4bbf-883e-aabf280fe22e","Type":"ContainerDied","Data":"0799204193ee6fb0ac4c511848a04cc29a30f9fd31ebfcab14241db41fbb8e6d"} Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.596802 4809 scope.go:117] "RemoveContainer" containerID="3fa858e890d83738234fdbb2c73ad1d7d67bdb9834004517350810694cfe0668" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.596820 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.640157 4809 scope.go:117] "RemoveContainer" containerID="a6ede6ce0480318857787a54e82e85fbafb16bed43eeb8a9ce79c6d82759c251" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.675305 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.704070 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.733287 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:29:08 crc kubenswrapper[4809]: E0312 08:29:08.734017 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerName="rabbitmq" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.734043 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerName="rabbitmq" Mar 12 08:29:08 crc kubenswrapper[4809]: E0312 08:29:08.734064 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1e547b-63a1-4600-83d6-7efc902df373" containerName="heat-engine" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.734071 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1e547b-63a1-4600-83d6-7efc902df373" containerName="heat-engine" Mar 12 08:29:08 crc kubenswrapper[4809]: E0312 08:29:08.734136 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerName="setup-container" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.734143 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerName="setup-container" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.734381 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2c1e547b-63a1-4600-83d6-7efc902df373" containerName="heat-engine" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.734421 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" containerName="rabbitmq" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.735766 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.760712 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842086 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842174 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66d312ab-6fb2-43de-98f2-dc692f592a47-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842200 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842361 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842420 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-config-data\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842439 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842624 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842698 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842797 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-server-conf\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842846 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zm7d\" (UniqueName: \"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-kube-api-access-9zm7d\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.842968 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66d312ab-6fb2-43de-98f2-dc692f592a47-pod-info\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.945677 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.945768 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-config-data\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.945795 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-plugins-conf\") pod \"rabbitmq-server-1\" (UID: 
\"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.945846 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.945877 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.945913 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-server-conf\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.945937 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zm7d\" (UniqueName: \"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-kube-api-access-9zm7d\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.945975 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66d312ab-6fb2-43de-98f2-dc692f592a47-pod-info\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 
08:29:08.946075 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.946140 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66d312ab-6fb2-43de-98f2-dc692f592a47-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.946167 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.947864 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-config-data\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.948292 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.948687 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.950108 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66d312ab-6fb2-43de-98f2-dc692f592a47-server-conf\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.950330 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.951907 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66d312ab-6fb2-43de-98f2-dc692f592a47-pod-info\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.951926 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.957014 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.957055 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/328f91188ed09e3478b0c39e8f690b6e2a7dde835523be1bc416406be231afd1/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.962960 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.963021 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66d312ab-6fb2-43de-98f2-dc692f592a47-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:08 crc kubenswrapper[4809]: I0312 08:29:08.969088 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zm7d\" (UniqueName: \"kubernetes.io/projected/66d312ab-6fb2-43de-98f2-dc692f592a47-kube-api-access-9zm7d\") pod \"rabbitmq-server-1\" (UID: \"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:09 crc kubenswrapper[4809]: I0312 08:29:09.027050 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc1ea871-b536-4145-bc22-b04289bfeff6\") pod \"rabbitmq-server-1\" (UID: 
\"66d312ab-6fb2-43de-98f2-dc692f592a47\") " pod="openstack/rabbitmq-server-1" Mar 12 08:29:09 crc kubenswrapper[4809]: I0312 08:29:09.074101 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 08:29:09 crc kubenswrapper[4809]: I0312 08:29:09.130817 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31094b6a-8ac7-4bbf-883e-aabf280fe22e" path="/var/lib/kubelet/pods/31094b6a-8ac7-4bbf-883e-aabf280fe22e/volumes" Mar 12 08:29:09 crc kubenswrapper[4809]: W0312 08:29:09.619914 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66d312ab_6fb2_43de_98f2_dc692f592a47.slice/crio-2857e0e76d89df08b37a352c28efc3963ec3736c29045ae8299dc803b9594e75 WatchSource:0}: Error finding container 2857e0e76d89df08b37a352c28efc3963ec3736c29045ae8299dc803b9594e75: Status 404 returned error can't find the container with id 2857e0e76d89df08b37a352c28efc3963ec3736c29045ae8299dc803b9594e75 Mar 12 08:29:09 crc kubenswrapper[4809]: I0312 08:29:09.621358 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 08:29:10 crc kubenswrapper[4809]: I0312 08:29:10.636712 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"66d312ab-6fb2-43de-98f2-dc692f592a47","Type":"ContainerStarted","Data":"2857e0e76d89df08b37a352c28efc3963ec3736c29045ae8299dc803b9594e75"} Mar 12 08:29:12 crc kubenswrapper[4809]: I0312 08:29:12.668362 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"66d312ab-6fb2-43de-98f2-dc692f592a47","Type":"ContainerStarted","Data":"43a897259d17738ff6fb9a9f3682087e53fbb029e6f030675019bc4775aaf7f5"} Mar 12 08:29:14 crc kubenswrapper[4809]: I0312 08:29:14.597550 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:29:15 crc 
kubenswrapper[4809]: I0312 08:29:15.106441 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:29:15 crc kubenswrapper[4809]: E0312 08:29:15.107185 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:29:15 crc kubenswrapper[4809]: I0312 08:29:15.717147 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" event={"ID":"baef87a5-805e-4a55-855f-df82b7292028","Type":"ContainerStarted","Data":"03605eab784818b9e48cad4cce68066369bbc09cebb3cb1a8ead6c82579e825e"} Mar 12 08:29:15 crc kubenswrapper[4809]: I0312 08:29:15.739394 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" podStartSLOduration=3.280878278 podStartE2EDuration="33.739371438s" podCreationTimestamp="2026-03-12 08:28:42 +0000 UTC" firstStartedPulling="2026-03-12 08:28:44.134522157 +0000 UTC m=+1797.716557890" lastFinishedPulling="2026-03-12 08:29:14.593015327 +0000 UTC m=+1828.175051050" observedRunningTime="2026-03-12 08:29:15.73873743 +0000 UTC m=+1829.320773163" watchObservedRunningTime="2026-03-12 08:29:15.739371438 +0000 UTC m=+1829.321407181" Mar 12 08:29:18 crc kubenswrapper[4809]: I0312 08:29:18.340945 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 08:29:18 crc kubenswrapper[4809]: I0312 08:29:18.764873 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-2cqrr" 
event={"ID":"38ca4679-8c23-449c-a9e8-fc4e224bd1af","Type":"ContainerStarted","Data":"4064341e4d2b5d0a89dcafe2e2ef9ec5e88e7d508fbe1c21d0c0ced5acbe01bf"} Mar 12 08:29:18 crc kubenswrapper[4809]: I0312 08:29:18.799079 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-2cqrr" podStartSLOduration=2.225455375 podStartE2EDuration="34.79904949s" podCreationTimestamp="2026-03-12 08:28:44 +0000 UTC" firstStartedPulling="2026-03-12 08:28:45.764175751 +0000 UTC m=+1799.346211484" lastFinishedPulling="2026-03-12 08:29:18.337769866 +0000 UTC m=+1831.919805599" observedRunningTime="2026-03-12 08:29:18.787404874 +0000 UTC m=+1832.369440607" watchObservedRunningTime="2026-03-12 08:29:18.79904949 +0000 UTC m=+1832.381085223" Mar 12 08:29:22 crc kubenswrapper[4809]: I0312 08:29:22.835365 4809 generic.go:334] "Generic (PLEG): container finished" podID="38ca4679-8c23-449c-a9e8-fc4e224bd1af" containerID="4064341e4d2b5d0a89dcafe2e2ef9ec5e88e7d508fbe1c21d0c0ced5acbe01bf" exitCode=0 Mar 12 08:29:22 crc kubenswrapper[4809]: I0312 08:29:22.835422 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-2cqrr" event={"ID":"38ca4679-8c23-449c-a9e8-fc4e224bd1af","Type":"ContainerDied","Data":"4064341e4d2b5d0a89dcafe2e2ef9ec5e88e7d508fbe1c21d0c0ced5acbe01bf"} Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.299940 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.439883 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-combined-ca-bundle\") pod \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.439981 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hftxg\" (UniqueName: \"kubernetes.io/projected/38ca4679-8c23-449c-a9e8-fc4e224bd1af-kube-api-access-hftxg\") pod \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.440199 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-config-data\") pod \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.440256 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-scripts\") pod \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\" (UID: \"38ca4679-8c23-449c-a9e8-fc4e224bd1af\") " Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.446373 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-scripts" (OuterVolumeSpecName: "scripts") pod "38ca4679-8c23-449c-a9e8-fc4e224bd1af" (UID: "38ca4679-8c23-449c-a9e8-fc4e224bd1af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.446629 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ca4679-8c23-449c-a9e8-fc4e224bd1af-kube-api-access-hftxg" (OuterVolumeSpecName: "kube-api-access-hftxg") pod "38ca4679-8c23-449c-a9e8-fc4e224bd1af" (UID: "38ca4679-8c23-449c-a9e8-fc4e224bd1af"). InnerVolumeSpecName "kube-api-access-hftxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.483823 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38ca4679-8c23-449c-a9e8-fc4e224bd1af" (UID: "38ca4679-8c23-449c-a9e8-fc4e224bd1af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.490021 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-config-data" (OuterVolumeSpecName: "config-data") pod "38ca4679-8c23-449c-a9e8-fc4e224bd1af" (UID: "38ca4679-8c23-449c-a9e8-fc4e224bd1af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.544787 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.544837 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hftxg\" (UniqueName: \"kubernetes.io/projected/38ca4679-8c23-449c-a9e8-fc4e224bd1af-kube-api-access-hftxg\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.544858 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.544871 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38ca4679-8c23-449c-a9e8-fc4e224bd1af-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.865748 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-2cqrr" event={"ID":"38ca4679-8c23-449c-a9e8-fc4e224bd1af","Type":"ContainerDied","Data":"e44adcfc64642eecb040312345dfbf935c77e74b384ea0d242bbef899001aae7"} Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.865816 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44adcfc64642eecb040312345dfbf935c77e74b384ea0d242bbef899001aae7" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.865943 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-2cqrr" Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.987150 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.987530 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-api" containerID="cri-o://c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576" gracePeriod=30 Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.987693 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-listener" containerID="cri-o://2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41" gracePeriod=30 Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.987731 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-evaluator" containerID="cri-o://0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e" gracePeriod=30 Mar 12 08:29:24 crc kubenswrapper[4809]: I0312 08:29:24.987836 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-notifier" containerID="cri-o://cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b" gracePeriod=30 Mar 12 08:29:25 crc kubenswrapper[4809]: I0312 08:29:25.880376 4809 generic.go:334] "Generic (PLEG): container finished" podID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerID="0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e" exitCode=0 Mar 12 08:29:25 crc kubenswrapper[4809]: I0312 08:29:25.880664 4809 generic.go:334] "Generic (PLEG): container finished" podID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" 
containerID="c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576" exitCode=0 Mar 12 08:29:25 crc kubenswrapper[4809]: I0312 08:29:25.880438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerDied","Data":"0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e"} Mar 12 08:29:25 crc kubenswrapper[4809]: I0312 08:29:25.880700 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerDied","Data":"c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576"} Mar 12 08:29:26 crc kubenswrapper[4809]: I0312 08:29:26.369067 4809 scope.go:117] "RemoveContainer" containerID="46c81eccdfc52f3dfa31b19209225eeb568e8d916856648b7a637a255ffb6e1d" Mar 12 08:29:26 crc kubenswrapper[4809]: I0312 08:29:26.430784 4809 scope.go:117] "RemoveContainer" containerID="17079a3b61c88ca57390b8f5db5fd1ebd79265ceda08b79633bb30c0994cd283" Mar 12 08:29:27 crc kubenswrapper[4809]: E0312 08:29:27.700577 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccfe1f5a_0ff5_4483_a3b3_c02e2880c14a.slice/crio-cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b.scope\": RecentStats: unable to find data in memory cache]" Mar 12 08:29:27 crc kubenswrapper[4809]: I0312 08:29:27.929256 4809 generic.go:334] "Generic (PLEG): container finished" podID="baef87a5-805e-4a55-855f-df82b7292028" containerID="03605eab784818b9e48cad4cce68066369bbc09cebb3cb1a8ead6c82579e825e" exitCode=0 Mar 12 08:29:27 crc kubenswrapper[4809]: I0312 08:29:27.929297 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" 
event={"ID":"baef87a5-805e-4a55-855f-df82b7292028","Type":"ContainerDied","Data":"03605eab784818b9e48cad4cce68066369bbc09cebb3cb1a8ead6c82579e825e"} Mar 12 08:29:27 crc kubenswrapper[4809]: I0312 08:29:27.943322 4809 generic.go:334] "Generic (PLEG): container finished" podID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerID="cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b" exitCode=0 Mar 12 08:29:27 crc kubenswrapper[4809]: I0312 08:29:27.943383 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerDied","Data":"cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b"} Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.720490 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.865938 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-config-data\") pod \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.866074 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-public-tls-certs\") pod \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.866189 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-combined-ca-bundle\") pod \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.866221 4809 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-internal-tls-certs\") pod \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.866249 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-scripts\") pod \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.867207 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94wbl\" (UniqueName: \"kubernetes.io/projected/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-kube-api-access-94wbl\") pod \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\" (UID: \"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a\") " Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.873187 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-kube-api-access-94wbl" (OuterVolumeSpecName: "kube-api-access-94wbl") pod "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" (UID: "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a"). InnerVolumeSpecName "kube-api-access-94wbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.880879 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-scripts" (OuterVolumeSpecName: "scripts") pod "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" (UID: "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.936515 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" (UID: "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.954919 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" (UID: "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.973420 4809 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.973520 4809 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.973537 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.973553 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94wbl\" (UniqueName: \"kubernetes.io/projected/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-kube-api-access-94wbl\") on node 
\"crc\" DevicePath \"\"" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.976735 4809 generic.go:334] "Generic (PLEG): container finished" podID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerID="2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41" exitCode=0 Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.976798 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerDied","Data":"2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41"} Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.976855 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a","Type":"ContainerDied","Data":"e2acaaaa3970eb8cdbef7136f857b82f8873818daff9cabe5a07b8c2ca6cd3ee"} Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.976882 4809 scope.go:117] "RemoveContainer" containerID="2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41" Mar 12 08:29:28 crc kubenswrapper[4809]: I0312 08:29:28.977201 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.022779 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-config-data" (OuterVolumeSpecName: "config-data") pod "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" (UID: "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.057909 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" (UID: "ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.075410 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.075445 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.175307 4809 scope.go:117] "RemoveContainer" containerID="cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.207184 4809 scope.go:117] "RemoveContainer" containerID="0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.229786 4809 scope.go:117] "RemoveContainer" containerID="c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.285014 4809 scope.go:117] "RemoveContainer" containerID="2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41" Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.285507 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41\": container with ID starting with 2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41 not found: ID does not exist" containerID="2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.285548 4809 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41"} err="failed to get container status \"2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41\": rpc error: code = NotFound desc = could not find container \"2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41\": container with ID starting with 2b987bde8509cd5aba10e8432e73940d4dd336273ad498fd7ac75ad0b01a7e41 not found: ID does not exist" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.285573 4809 scope.go:117] "RemoveContainer" containerID="cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b" Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.285876 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b\": container with ID starting with cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b not found: ID does not exist" containerID="cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.285905 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b"} err="failed to get container status \"cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b\": rpc error: code = NotFound desc = could not find container \"cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b\": container with ID starting with cab7ab924a4c4ad6647d58bc18fe03312ea5a779d09402133a88bd249a7b957b not found: ID does not exist" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.285922 4809 scope.go:117] "RemoveContainer" containerID="0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e" Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.286353 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e\": container with ID starting with 0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e not found: ID does not exist" containerID="0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.286377 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e"} err="failed to get container status \"0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e\": rpc error: code = NotFound desc = could not find container \"0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e\": container with ID starting with 0444095daf15db78ef070a7091c3de12368cf54ff9458a8b040ff8aef404e31e not found: ID does not exist" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.286392 4809 scope.go:117] "RemoveContainer" containerID="c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576" Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.286686 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576\": container with ID starting with c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576 not found: ID does not exist" containerID="c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.286708 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576"} err="failed to get container status \"c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576\": rpc error: code = NotFound desc = could not find container 
\"c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576\": container with ID starting with c7a204f3c607e4769dc40c5c680c2f4d8d2295883e411cc0415787fd745ad576 not found: ID does not exist" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.317184 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.341273 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.360662 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.361420 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-evaluator" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361452 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-evaluator" Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.361471 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-api" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361480 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-api" Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.361489 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-listener" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361497 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-listener" Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.361531 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ca4679-8c23-449c-a9e8-fc4e224bd1af" containerName="aodh-db-sync" Mar 12 
08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361539 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ca4679-8c23-449c-a9e8-fc4e224bd1af" containerName="aodh-db-sync" Mar 12 08:29:29 crc kubenswrapper[4809]: E0312 08:29:29.361554 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-notifier" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361562 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-notifier" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361813 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-api" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361835 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-notifier" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361847 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-listener" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361869 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" containerName="aodh-evaluator" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.361880 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ca4679-8c23-449c-a9e8-fc4e224bd1af" containerName="aodh-db-sync" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.364309 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.368185 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.368400 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-qhrlw" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.368532 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.369458 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.369566 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.393597 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.489646 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-config-data\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.489814 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfzz8\" (UniqueName: \"kubernetes.io/projected/b43dd5f7-6f15-464c-8fea-98a37a6942d1-kube-api-access-zfzz8\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.489883 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-public-tls-certs\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.490013 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.490075 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-internal-tls-certs\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.490127 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-scripts\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.592384 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.592476 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-internal-tls-certs\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 
08:29:29.592521 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-scripts\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.592748 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-config-data\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.592828 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfzz8\" (UniqueName: \"kubernetes.io/projected/b43dd5f7-6f15-464c-8fea-98a37a6942d1-kube-api-access-zfzz8\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.592869 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-public-tls-certs\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.598225 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-internal-tls-certs\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.598647 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-scripts\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 
crc kubenswrapper[4809]: I0312 08:29:29.599399 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-public-tls-certs\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.601937 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.610673 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43dd5f7-6f15-464c-8fea-98a37a6942d1-config-data\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.613753 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfzz8\" (UniqueName: \"kubernetes.io/projected/b43dd5f7-6f15-464c-8fea-98a37a6942d1-kube-api-access-zfzz8\") pod \"aodh-0\" (UID: \"b43dd5f7-6f15-464c-8fea-98a37a6942d1\") " pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.646546 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.693888 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.796553 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-inventory\") pod \"baef87a5-805e-4a55-855f-df82b7292028\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.796623 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-ssh-key-openstack-edpm-ipam\") pod \"baef87a5-805e-4a55-855f-df82b7292028\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.796775 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-repo-setup-combined-ca-bundle\") pod \"baef87a5-805e-4a55-855f-df82b7292028\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.796803 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4qv7\" (UniqueName: \"kubernetes.io/projected/baef87a5-805e-4a55-855f-df82b7292028-kube-api-access-f4qv7\") pod \"baef87a5-805e-4a55-855f-df82b7292028\" (UID: \"baef87a5-805e-4a55-855f-df82b7292028\") " Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.802017 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baef87a5-805e-4a55-855f-df82b7292028-kube-api-access-f4qv7" (OuterVolumeSpecName: "kube-api-access-f4qv7") pod "baef87a5-805e-4a55-855f-df82b7292028" (UID: "baef87a5-805e-4a55-855f-df82b7292028"). InnerVolumeSpecName "kube-api-access-f4qv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.802064 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "baef87a5-805e-4a55-855f-df82b7292028" (UID: "baef87a5-805e-4a55-855f-df82b7292028"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.838872 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "baef87a5-805e-4a55-855f-df82b7292028" (UID: "baef87a5-805e-4a55-855f-df82b7292028"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.845815 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-inventory" (OuterVolumeSpecName: "inventory") pod "baef87a5-805e-4a55-855f-df82b7292028" (UID: "baef87a5-805e-4a55-855f-df82b7292028"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.902508 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.902900 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.902914 4809 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baef87a5-805e-4a55-855f-df82b7292028-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:29 crc kubenswrapper[4809]: I0312 08:29:29.902925 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4qv7\" (UniqueName: \"kubernetes.io/projected/baef87a5-805e-4a55-855f-df82b7292028-kube-api-access-f4qv7\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.006089 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" event={"ID":"baef87a5-805e-4a55-855f-df82b7292028","Type":"ContainerDied","Data":"ddeb9b5850ee80f05102e1eb172ee51378607cd33ec2bf3f296e2eb9a33dbc5e"} Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.006162 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddeb9b5850ee80f05102e1eb172ee51378607cd33ec2bf3f296e2eb9a33dbc5e" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.006251 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.066545 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r"] Mar 12 08:29:30 crc kubenswrapper[4809]: E0312 08:29:30.067249 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baef87a5-805e-4a55-855f-df82b7292028" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.067271 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="baef87a5-805e-4a55-855f-df82b7292028" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.067526 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="baef87a5-805e-4a55-855f-df82b7292028" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.068488 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.071358 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.071462 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.071531 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.071777 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.081029 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r"] Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.107815 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:29:30 crc kubenswrapper[4809]: E0312 08:29:30.108260 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.183270 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.209798 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.210634 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm99n\" (UniqueName: \"kubernetes.io/projected/bc959ced-37e3-4644-a945-4d2803e3d453-kube-api-access-fm99n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.210824 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.313716 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm99n\" (UniqueName: \"kubernetes.io/projected/bc959ced-37e3-4644-a945-4d2803e3d453-kube-api-access-fm99n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.313892 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: 
\"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.314098 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.319142 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.320973 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.335750 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm99n\" (UniqueName: \"kubernetes.io/projected/bc959ced-37e3-4644-a945-4d2803e3d453-kube-api-access-fm99n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wbr6r\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.389140 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:30 crc kubenswrapper[4809]: I0312 08:29:30.949964 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r"] Mar 12 08:29:31 crc kubenswrapper[4809]: I0312 08:29:31.031911 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b43dd5f7-6f15-464c-8fea-98a37a6942d1","Type":"ContainerStarted","Data":"ea1afb914bb3b1171ad6f351fddd55ab6fa8da31f7656a1afae51af29b681400"} Mar 12 08:29:31 crc kubenswrapper[4809]: I0312 08:29:31.031979 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b43dd5f7-6f15-464c-8fea-98a37a6942d1","Type":"ContainerStarted","Data":"950495f8ba03ac48ff652ca48e4b397dca0931026185f2e4d7590bed3f7c73ef"} Mar 12 08:29:31 crc kubenswrapper[4809]: I0312 08:29:31.035286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" event={"ID":"bc959ced-37e3-4644-a945-4d2803e3d453","Type":"ContainerStarted","Data":"f3795555c6270b0d94139811710331561bfb33b11d1530d68c0db63add715eba"} Mar 12 08:29:31 crc kubenswrapper[4809]: I0312 08:29:31.122206 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a" path="/var/lib/kubelet/pods/ccfe1f5a-0ff5-4483-a3b3-c02e2880c14a/volumes" Mar 12 08:29:32 crc kubenswrapper[4809]: I0312 08:29:32.055134 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b43dd5f7-6f15-464c-8fea-98a37a6942d1","Type":"ContainerStarted","Data":"cc442ae43c9f7ba04575b99aa2a5fdb13886d5dc6075ea640408d4f168f58dcd"} Mar 12 08:29:32 crc kubenswrapper[4809]: I0312 08:29:32.059799 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" 
event={"ID":"bc959ced-37e3-4644-a945-4d2803e3d453","Type":"ContainerStarted","Data":"ebde92b664a178c71fe97df9eab58db58af7b8dbbd338de6571f1b6dc92eeb2c"} Mar 12 08:29:32 crc kubenswrapper[4809]: I0312 08:29:32.092093 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" podStartSLOduration=1.343442759 podStartE2EDuration="2.092065371s" podCreationTimestamp="2026-03-12 08:29:30 +0000 UTC" firstStartedPulling="2026-03-12 08:29:30.958616501 +0000 UTC m=+1844.540652234" lastFinishedPulling="2026-03-12 08:29:31.707239113 +0000 UTC m=+1845.289274846" observedRunningTime="2026-03-12 08:29:32.076257201 +0000 UTC m=+1845.658292934" watchObservedRunningTime="2026-03-12 08:29:32.092065371 +0000 UTC m=+1845.674101104" Mar 12 08:29:34 crc kubenswrapper[4809]: I0312 08:29:34.093791 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b43dd5f7-6f15-464c-8fea-98a37a6942d1","Type":"ContainerStarted","Data":"19e2bf3a3bbb353eaa38d8a3d5993a5dd663c3ea88beb7761d5c03de381add92"} Mar 12 08:29:35 crc kubenswrapper[4809]: I0312 08:29:35.129992 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b43dd5f7-6f15-464c-8fea-98a37a6942d1","Type":"ContainerStarted","Data":"16c69e6eb73559459cdf3936dabe6a0292cd0aa3ecf8ef045591df6750f688fc"} Mar 12 08:29:35 crc kubenswrapper[4809]: I0312 08:29:35.130902 4809 generic.go:334] "Generic (PLEG): container finished" podID="bc959ced-37e3-4644-a945-4d2803e3d453" containerID="ebde92b664a178c71fe97df9eab58db58af7b8dbbd338de6571f1b6dc92eeb2c" exitCode=0 Mar 12 08:29:35 crc kubenswrapper[4809]: I0312 08:29:35.130955 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" event={"ID":"bc959ced-37e3-4644-a945-4d2803e3d453","Type":"ContainerDied","Data":"ebde92b664a178c71fe97df9eab58db58af7b8dbbd338de6571f1b6dc92eeb2c"} Mar 12 08:29:35 crc 
kubenswrapper[4809]: I0312 08:29:35.186230 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.3091830460000002 podStartE2EDuration="6.18620401s" podCreationTimestamp="2026-03-12 08:29:29 +0000 UTC" firstStartedPulling="2026-03-12 08:29:30.192655636 +0000 UTC m=+1843.774691369" lastFinishedPulling="2026-03-12 08:29:34.0696766 +0000 UTC m=+1847.651712333" observedRunningTime="2026-03-12 08:29:35.153636575 +0000 UTC m=+1848.735672328" watchObservedRunningTime="2026-03-12 08:29:35.18620401 +0000 UTC m=+1848.768239743" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.350739 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.450318 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-inventory\") pod \"bc959ced-37e3-4644-a945-4d2803e3d453\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.450572 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-ssh-key-openstack-edpm-ipam\") pod \"bc959ced-37e3-4644-a945-4d2803e3d453\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.450609 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm99n\" (UniqueName: \"kubernetes.io/projected/bc959ced-37e3-4644-a945-4d2803e3d453-kube-api-access-fm99n\") pod \"bc959ced-37e3-4644-a945-4d2803e3d453\" (UID: \"bc959ced-37e3-4644-a945-4d2803e3d453\") " Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.456001 4809 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc959ced-37e3-4644-a945-4d2803e3d453-kube-api-access-fm99n" (OuterVolumeSpecName: "kube-api-access-fm99n") pod "bc959ced-37e3-4644-a945-4d2803e3d453" (UID: "bc959ced-37e3-4644-a945-4d2803e3d453"). InnerVolumeSpecName "kube-api-access-fm99n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.483924 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-inventory" (OuterVolumeSpecName: "inventory") pod "bc959ced-37e3-4644-a945-4d2803e3d453" (UID: "bc959ced-37e3-4644-a945-4d2803e3d453"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.502444 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc959ced-37e3-4644-a945-4d2803e3d453" (UID: "bc959ced-37e3-4644-a945-4d2803e3d453"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.553132 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.553169 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm99n\" (UniqueName: \"kubernetes.io/projected/bc959ced-37e3-4644-a945-4d2803e3d453-kube-api-access-fm99n\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.553181 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc959ced-37e3-4644-a945-4d2803e3d453-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.787029 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" event={"ID":"bc959ced-37e3-4644-a945-4d2803e3d453","Type":"ContainerDied","Data":"f3795555c6270b0d94139811710331561bfb33b11d1530d68c0db63add715eba"} Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.787093 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3795555c6270b0d94139811710331561bfb33b11d1530d68c0db63add715eba" Mar 12 08:29:38 crc kubenswrapper[4809]: I0312 08:29:38.787098 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wbr6r" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.450303 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp"] Mar 12 08:29:39 crc kubenswrapper[4809]: E0312 08:29:39.451011 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc959ced-37e3-4644-a945-4d2803e3d453" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.451028 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc959ced-37e3-4644-a945-4d2803e3d453" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.451284 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc959ced-37e3-4644-a945-4d2803e3d453" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.452288 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.454821 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.454898 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.455080 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.455590 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.462378 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp"] Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.589277 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.589352 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxx8\" (UniqueName: \"kubernetes.io/projected/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-kube-api-access-zdxx8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 
08:29:39.589556 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.589579 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.692719 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.692816 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxx8\" (UniqueName: \"kubernetes.io/projected/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-kube-api-access-zdxx8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.693048 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.693072 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.698752 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.699260 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.699500 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.727689 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxx8\" (UniqueName: \"kubernetes.io/projected/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-kube-api-access-zdxx8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:39 crc kubenswrapper[4809]: I0312 08:29:39.798646 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:29:40 crc kubenswrapper[4809]: I0312 08:29:40.359107 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp"] Mar 12 08:29:40 crc kubenswrapper[4809]: I0312 08:29:40.810485 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" event={"ID":"23801b93-a5b2-44dc-b04a-bc3c50fccbfd","Type":"ContainerStarted","Data":"5d72a12691ed263533cfd4de1b70f5a98c640ad9b3ce0d509af3a3d91cb6638e"} Mar 12 08:29:42 crc kubenswrapper[4809]: I0312 08:29:42.837449 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" event={"ID":"23801b93-a5b2-44dc-b04a-bc3c50fccbfd","Type":"ContainerStarted","Data":"52b27754ae461836ffec296cab3737ca6b37e8234634f62cda6771b838a2e5b0"} Mar 12 08:29:42 crc kubenswrapper[4809]: I0312 08:29:42.869953 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" podStartSLOduration=2.28366353 podStartE2EDuration="3.869932885s" podCreationTimestamp="2026-03-12 08:29:39 +0000 UTC" firstStartedPulling="2026-03-12 08:29:40.359240131 +0000 UTC m=+1853.941275864" 
lastFinishedPulling="2026-03-12 08:29:41.945509486 +0000 UTC m=+1855.527545219" observedRunningTime="2026-03-12 08:29:42.855526943 +0000 UTC m=+1856.437562686" watchObservedRunningTime="2026-03-12 08:29:42.869932885 +0000 UTC m=+1856.451968618" Mar 12 08:29:45 crc kubenswrapper[4809]: I0312 08:29:45.107462 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10" Mar 12 08:29:45 crc kubenswrapper[4809]: I0312 08:29:45.876452 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"97289a619c1926063f6f049c601dbe841c3463284a0fd671e4120b278e3af3bb"} Mar 12 08:29:45 crc kubenswrapper[4809]: I0312 08:29:45.877985 4809 generic.go:334] "Generic (PLEG): container finished" podID="66d312ab-6fb2-43de-98f2-dc692f592a47" containerID="43a897259d17738ff6fb9a9f3682087e53fbb029e6f030675019bc4775aaf7f5" exitCode=0 Mar 12 08:29:45 crc kubenswrapper[4809]: I0312 08:29:45.878036 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"66d312ab-6fb2-43de-98f2-dc692f592a47","Type":"ContainerDied","Data":"43a897259d17738ff6fb9a9f3682087e53fbb029e6f030675019bc4775aaf7f5"} Mar 12 08:29:46 crc kubenswrapper[4809]: I0312 08:29:46.895471 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"66d312ab-6fb2-43de-98f2-dc692f592a47","Type":"ContainerStarted","Data":"4f6f4d75659caab0298dfbb302aa1f59f8001a694f9307c654a1e4ef7b087423"} Mar 12 08:29:46 crc kubenswrapper[4809]: I0312 08:29:46.896377 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 12 08:29:46 crc kubenswrapper[4809]: I0312 08:29:46.935943 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=38.935917023 
podStartE2EDuration="38.935917023s" podCreationTimestamp="2026-03-12 08:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:29:46.926723613 +0000 UTC m=+1860.508759426" watchObservedRunningTime="2026-03-12 08:29:46.935917023 +0000 UTC m=+1860.517952756" Mar 12 08:29:59 crc kubenswrapper[4809]: I0312 08:29:59.081703 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 12 08:29:59 crc kubenswrapper[4809]: I0312 08:29:59.253204 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.143054 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555070-k86cx"] Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.145210 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555070-k86cx" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.149227 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.149619 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.151512 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.179364 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555070-k86cx"] Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.254367 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz"] Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 
08:30:00.256048 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.263222 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.263514 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.288846 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz"] Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.293080 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77gf5\" (UniqueName: \"kubernetes.io/projected/ba2b5d5b-19ed-4e05-9165-b5064fec1040-kube-api-access-77gf5\") pod \"auto-csr-approver-29555070-k86cx\" (UID: \"ba2b5d5b-19ed-4e05-9165-b5064fec1040\") " pod="openshift-infra/auto-csr-approver-29555070-k86cx" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.394982 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77gf5\" (UniqueName: \"kubernetes.io/projected/ba2b5d5b-19ed-4e05-9165-b5064fec1040-kube-api-access-77gf5\") pod \"auto-csr-approver-29555070-k86cx\" (UID: \"ba2b5d5b-19ed-4e05-9165-b5064fec1040\") " pod="openshift-infra/auto-csr-approver-29555070-k86cx" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.395133 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1804f4be-3be2-4237-8059-f28f28a33dba-config-volume\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.395180 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gr9v\" (UniqueName: \"kubernetes.io/projected/1804f4be-3be2-4237-8059-f28f28a33dba-kube-api-access-8gr9v\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.395541 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1804f4be-3be2-4237-8059-f28f28a33dba-secret-volume\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.433887 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77gf5\" (UniqueName: \"kubernetes.io/projected/ba2b5d5b-19ed-4e05-9165-b5064fec1040-kube-api-access-77gf5\") pod \"auto-csr-approver-29555070-k86cx\" (UID: \"ba2b5d5b-19ed-4e05-9165-b5064fec1040\") " pod="openshift-infra/auto-csr-approver-29555070-k86cx" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.477608 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555070-k86cx" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.498475 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1804f4be-3be2-4237-8059-f28f28a33dba-secret-volume\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.499321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1804f4be-3be2-4237-8059-f28f28a33dba-config-volume\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.499389 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gr9v\" (UniqueName: \"kubernetes.io/projected/1804f4be-3be2-4237-8059-f28f28a33dba-kube-api-access-8gr9v\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.500616 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1804f4be-3be2-4237-8059-f28f28a33dba-config-volume\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.504963 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1804f4be-3be2-4237-8059-f28f28a33dba-secret-volume\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.539403 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gr9v\" (UniqueName: \"kubernetes.io/projected/1804f4be-3be2-4237-8059-f28f28a33dba-kube-api-access-8gr9v\") pod \"collect-profiles-29555070-9qckz\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:00 crc kubenswrapper[4809]: I0312 08:30:00.603152 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:01 crc kubenswrapper[4809]: I0312 08:30:01.257949 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555070-k86cx"] Mar 12 08:30:01 crc kubenswrapper[4809]: W0312 08:30:01.267833 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba2b5d5b_19ed_4e05_9165_b5064fec1040.slice/crio-6308acd1fae9ebc9106e4ce220fe17bfdc51f09bc336e9c8678f400789fdb976 WatchSource:0}: Error finding container 6308acd1fae9ebc9106e4ce220fe17bfdc51f09bc336e9c8678f400789fdb976: Status 404 returned error can't find the container with id 6308acd1fae9ebc9106e4ce220fe17bfdc51f09bc336e9c8678f400789fdb976 Mar 12 08:30:01 crc kubenswrapper[4809]: I0312 08:30:01.459087 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz"] Mar 12 08:30:01 crc kubenswrapper[4809]: W0312 08:30:01.460741 4809 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1804f4be_3be2_4237_8059_f28f28a33dba.slice/crio-9928a54cffe4aafc20df8ce96fe62ae7723854a0902003fb34685ef644e153e7 WatchSource:0}: Error finding container 9928a54cffe4aafc20df8ce96fe62ae7723854a0902003fb34685ef644e153e7: Status 404 returned error can't find the container with id 9928a54cffe4aafc20df8ce96fe62ae7723854a0902003fb34685ef644e153e7 Mar 12 08:30:02 crc kubenswrapper[4809]: I0312 08:30:02.113872 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" event={"ID":"1804f4be-3be2-4237-8059-f28f28a33dba","Type":"ContainerStarted","Data":"cad53017dac45a115ef0ab45c13a086ab88169d1bbc66965d3cbf6a4292a3437"} Mar 12 08:30:02 crc kubenswrapper[4809]: I0312 08:30:02.114481 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" event={"ID":"1804f4be-3be2-4237-8059-f28f28a33dba","Type":"ContainerStarted","Data":"9928a54cffe4aafc20df8ce96fe62ae7723854a0902003fb34685ef644e153e7"} Mar 12 08:30:02 crc kubenswrapper[4809]: I0312 08:30:02.121689 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555070-k86cx" event={"ID":"ba2b5d5b-19ed-4e05-9165-b5064fec1040","Type":"ContainerStarted","Data":"6308acd1fae9ebc9106e4ce220fe17bfdc51f09bc336e9c8678f400789fdb976"} Mar 12 08:30:02 crc kubenswrapper[4809]: I0312 08:30:02.153707 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" podStartSLOduration=2.153678245 podStartE2EDuration="2.153678245s" podCreationTimestamp="2026-03-12 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:30:02.138666048 +0000 UTC m=+1875.720701781" watchObservedRunningTime="2026-03-12 08:30:02.153678245 
+0000 UTC m=+1875.735713978" Mar 12 08:30:03 crc kubenswrapper[4809]: I0312 08:30:03.138620 4809 generic.go:334] "Generic (PLEG): container finished" podID="1804f4be-3be2-4237-8059-f28f28a33dba" containerID="cad53017dac45a115ef0ab45c13a086ab88169d1bbc66965d3cbf6a4292a3437" exitCode=0 Mar 12 08:30:03 crc kubenswrapper[4809]: I0312 08:30:03.139996 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" event={"ID":"1804f4be-3be2-4237-8059-f28f28a33dba","Type":"ContainerDied","Data":"cad53017dac45a115ef0ab45c13a086ab88169d1bbc66965d3cbf6a4292a3437"} Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.162344 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555070-k86cx" event={"ID":"ba2b5d5b-19ed-4e05-9165-b5064fec1040","Type":"ContainerStarted","Data":"5b90b48ef5fc296ddac81d2e4f25cde423906c39d086a92406eed385bd5f370f"} Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.195369 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555070-k86cx" podStartSLOduration=2.287916084 podStartE2EDuration="4.195344266s" podCreationTimestamp="2026-03-12 08:30:00 +0000 UTC" firstStartedPulling="2026-03-12 08:30:01.271525824 +0000 UTC m=+1874.853561557" lastFinishedPulling="2026-03-12 08:30:03.178954006 +0000 UTC m=+1876.760989739" observedRunningTime="2026-03-12 08:30:04.178710843 +0000 UTC m=+1877.760746576" watchObservedRunningTime="2026-03-12 08:30:04.195344266 +0000 UTC m=+1877.777379999" Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.658329 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="d043d696-d09f-4c43-8960-0d31789103e8" containerName="rabbitmq" containerID="cri-o://a5d33dc459ac717259e45f13ec4149298d61afa5b3792d19a16b0edda76ae037" gracePeriod=604795 Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.673944 
4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.770380 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1804f4be-3be2-4237-8059-f28f28a33dba-config-volume\") pod \"1804f4be-3be2-4237-8059-f28f28a33dba\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.770542 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gr9v\" (UniqueName: \"kubernetes.io/projected/1804f4be-3be2-4237-8059-f28f28a33dba-kube-api-access-8gr9v\") pod \"1804f4be-3be2-4237-8059-f28f28a33dba\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.770730 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1804f4be-3be2-4237-8059-f28f28a33dba-secret-volume\") pod \"1804f4be-3be2-4237-8059-f28f28a33dba\" (UID: \"1804f4be-3be2-4237-8059-f28f28a33dba\") " Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.772884 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1804f4be-3be2-4237-8059-f28f28a33dba-config-volume" (OuterVolumeSpecName: "config-volume") pod "1804f4be-3be2-4237-8059-f28f28a33dba" (UID: "1804f4be-3be2-4237-8059-f28f28a33dba"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.780788 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1804f4be-3be2-4237-8059-f28f28a33dba-kube-api-access-8gr9v" (OuterVolumeSpecName: "kube-api-access-8gr9v") pod "1804f4be-3be2-4237-8059-f28f28a33dba" (UID: "1804f4be-3be2-4237-8059-f28f28a33dba"). InnerVolumeSpecName "kube-api-access-8gr9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.781216 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1804f4be-3be2-4237-8059-f28f28a33dba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1804f4be-3be2-4237-8059-f28f28a33dba" (UID: "1804f4be-3be2-4237-8059-f28f28a33dba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.875056 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1804f4be-3be2-4237-8059-f28f28a33dba-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.875107 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gr9v\" (UniqueName: \"kubernetes.io/projected/1804f4be-3be2-4237-8059-f28f28a33dba-kube-api-access-8gr9v\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:04 crc kubenswrapper[4809]: I0312 08:30:04.875176 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1804f4be-3be2-4237-8059-f28f28a33dba-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:05 crc kubenswrapper[4809]: I0312 08:30:05.183433 4809 generic.go:334] "Generic (PLEG): container finished" podID="ba2b5d5b-19ed-4e05-9165-b5064fec1040" 
containerID="5b90b48ef5fc296ddac81d2e4f25cde423906c39d086a92406eed385bd5f370f" exitCode=0 Mar 12 08:30:05 crc kubenswrapper[4809]: I0312 08:30:05.183553 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555070-k86cx" event={"ID":"ba2b5d5b-19ed-4e05-9165-b5064fec1040","Type":"ContainerDied","Data":"5b90b48ef5fc296ddac81d2e4f25cde423906c39d086a92406eed385bd5f370f"} Mar 12 08:30:05 crc kubenswrapper[4809]: I0312 08:30:05.189529 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" Mar 12 08:30:05 crc kubenswrapper[4809]: I0312 08:30:05.190994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz" event={"ID":"1804f4be-3be2-4237-8059-f28f28a33dba","Type":"ContainerDied","Data":"9928a54cffe4aafc20df8ce96fe62ae7723854a0902003fb34685ef644e153e7"} Mar 12 08:30:05 crc kubenswrapper[4809]: I0312 08:30:05.191055 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9928a54cffe4aafc20df8ce96fe62ae7723854a0902003fb34685ef644e153e7" Mar 12 08:30:06 crc kubenswrapper[4809]: I0312 08:30:06.668905 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555070-k86cx" Mar 12 08:30:06 crc kubenswrapper[4809]: I0312 08:30:06.741416 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77gf5\" (UniqueName: \"kubernetes.io/projected/ba2b5d5b-19ed-4e05-9165-b5064fec1040-kube-api-access-77gf5\") pod \"ba2b5d5b-19ed-4e05-9165-b5064fec1040\" (UID: \"ba2b5d5b-19ed-4e05-9165-b5064fec1040\") " Mar 12 08:30:06 crc kubenswrapper[4809]: I0312 08:30:06.749464 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2b5d5b-19ed-4e05-9165-b5064fec1040-kube-api-access-77gf5" (OuterVolumeSpecName: "kube-api-access-77gf5") pod "ba2b5d5b-19ed-4e05-9165-b5064fec1040" (UID: "ba2b5d5b-19ed-4e05-9165-b5064fec1040"). InnerVolumeSpecName "kube-api-access-77gf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:30:06 crc kubenswrapper[4809]: I0312 08:30:06.844723 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77gf5\" (UniqueName: \"kubernetes.io/projected/ba2b5d5b-19ed-4e05-9165-b5064fec1040-kube-api-access-77gf5\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:07 crc kubenswrapper[4809]: I0312 08:30:07.252361 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555070-k86cx" event={"ID":"ba2b5d5b-19ed-4e05-9165-b5064fec1040","Type":"ContainerDied","Data":"6308acd1fae9ebc9106e4ce220fe17bfdc51f09bc336e9c8678f400789fdb976"} Mar 12 08:30:07 crc kubenswrapper[4809]: I0312 08:30:07.252411 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6308acd1fae9ebc9106e4ce220fe17bfdc51f09bc336e9c8678f400789fdb976" Mar 12 08:30:07 crc kubenswrapper[4809]: I0312 08:30:07.252498 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555070-k86cx" Mar 12 08:30:07 crc kubenswrapper[4809]: I0312 08:30:07.265283 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555064-rz9hw"] Mar 12 08:30:07 crc kubenswrapper[4809]: I0312 08:30:07.277442 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555064-rz9hw"] Mar 12 08:30:09 crc kubenswrapper[4809]: I0312 08:30:09.131258 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8a8f45-bb92-4a72-8fcb-5c82ab191829" path="/var/lib/kubelet/pods/bb8a8f45-bb92-4a72-8fcb-5c82ab191829/volumes" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.310939 4809 generic.go:334] "Generic (PLEG): container finished" podID="d043d696-d09f-4c43-8960-0d31789103e8" containerID="a5d33dc459ac717259e45f13ec4149298d61afa5b3792d19a16b0edda76ae037" exitCode=0 Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.311032 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d043d696-d09f-4c43-8960-0d31789103e8","Type":"ContainerDied","Data":"a5d33dc459ac717259e45f13ec4149298d61afa5b3792d19a16b0edda76ae037"} Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.311755 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d043d696-d09f-4c43-8960-0d31789103e8","Type":"ContainerDied","Data":"09e02e75df58f4998c203bf35d469e713466228ba5da0e5d43774c743fb9cd14"} Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.311778 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09e02e75df58f4998c203bf35d469e713466228ba5da0e5d43774c743fb9cd14" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.406051 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.465909 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-plugins-conf\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.466863 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.467010 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7hvq\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-kube-api-access-j7hvq\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.467070 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d043d696-d09f-4c43-8960-0d31789103e8-erlang-cookie-secret\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.467132 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.467172 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-tls\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.467236 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-server-conf\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.467273 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-config-data\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.467325 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d043d696-d09f-4c43-8960-0d31789103e8-pod-info\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.468157 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-erlang-cookie\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.468366 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-confd\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.468465 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-plugins\") pod \"d043d696-d09f-4c43-8960-0d31789103e8\" (UID: \"d043d696-d09f-4c43-8960-0d31789103e8\") " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.469137 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.470336 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.470361 4809 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.471601 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.477025 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-kube-api-access-j7hvq" (OuterVolumeSpecName: "kube-api-access-j7hvq") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "kube-api-access-j7hvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.477476 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d043d696-d09f-4c43-8960-0d31789103e8-pod-info" (OuterVolumeSpecName: "pod-info") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.479632 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d043d696-d09f-4c43-8960-0d31789103e8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.487764 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.516513 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f" (OuterVolumeSpecName: "persistence") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "pvc-15723c3d-2262-4670-a49d-b9f57c4f291f". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.541787 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-config-data" (OuterVolumeSpecName: "config-data") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.572878 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") on node \"crc\" " Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.572921 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7hvq\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-kube-api-access-j7hvq\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.572934 4809 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d043d696-d09f-4c43-8960-0d31789103e8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.572944 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.572958 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.572967 4809 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d043d696-d09f-4c43-8960-0d31789103e8-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.572975 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.606879 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-server-conf" (OuterVolumeSpecName: "server-conf") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.636803 4809 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.636999 4809 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15723c3d-2262-4670-a49d-b9f57c4f291f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f") on node "crc" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.663958 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d043d696-d09f-4c43-8960-0d31789103e8" (UID: "d043d696-d09f-4c43-8960-0d31789103e8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.675825 4809 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d043d696-d09f-4c43-8960-0d31789103e8-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.675852 4809 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d043d696-d09f-4c43-8960-0d31789103e8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:11 crc kubenswrapper[4809]: I0312 08:30:11.675865 4809 reconciler_common.go:293] "Volume detached for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") on node \"crc\" DevicePath \"\"" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.322814 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.374478 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.397810 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.469640 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:30:12 crc kubenswrapper[4809]: E0312 08:30:12.471303 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d043d696-d09f-4c43-8960-0d31789103e8" containerName="setup-container" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.471323 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d043d696-d09f-4c43-8960-0d31789103e8" containerName="setup-container" Mar 12 08:30:12 crc kubenswrapper[4809]: E0312 08:30:12.471357 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d043d696-d09f-4c43-8960-0d31789103e8" containerName="rabbitmq" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.471367 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d043d696-d09f-4c43-8960-0d31789103e8" containerName="rabbitmq" Mar 12 08:30:12 crc kubenswrapper[4809]: E0312 08:30:12.471393 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1804f4be-3be2-4237-8059-f28f28a33dba" containerName="collect-profiles" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.471403 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1804f4be-3be2-4237-8059-f28f28a33dba" containerName="collect-profiles" Mar 12 08:30:12 crc kubenswrapper[4809]: E0312 08:30:12.471453 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2b5d5b-19ed-4e05-9165-b5064fec1040" containerName="oc" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.471461 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba2b5d5b-19ed-4e05-9165-b5064fec1040" containerName="oc" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.472227 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1804f4be-3be2-4237-8059-f28f28a33dba" containerName="collect-profiles" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.472264 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2b5d5b-19ed-4e05-9165-b5064fec1040" containerName="oc" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.472288 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d043d696-d09f-4c43-8960-0d31789103e8" containerName="rabbitmq" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.482553 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.494817 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.599687 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/396048fa-2424-4d6c-80b6-61e9cce8a4ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.599924 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.599980 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.600034 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.600060 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.600090 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/396048fa-2424-4d6c-80b6-61e9cce8a4ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.600167 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.600203 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd546\" (UniqueName: 
\"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-kube-api-access-rd546\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.600236 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.600259 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.600284 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.702471 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/396048fa-2424-4d6c-80b6-61e9cce8a4ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.702726 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.702878 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.703044 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.703238 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.703389 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/396048fa-2424-4d6c-80b6-61e9cce8a4ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.703588 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.703779 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd546\" (UniqueName: \"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-kube-api-access-rd546\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.703932 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.704046 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.704264 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.704545 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc 
kubenswrapper[4809]: I0312 08:30:12.704588 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.704886 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.704953 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/396048fa-2424-4d6c-80b6-61e9cce8a4ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.707437 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.708525 4809 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.708578 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffccda2180f9e360d117dc76242223eeb49978e3ef6885ba8cb876e094cb7312/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.708941 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.710228 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/396048fa-2424-4d6c-80b6-61e9cce8a4ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.712638 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/396048fa-2424-4d6c-80b6-61e9cce8a4ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.712688 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " 
pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.725529 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd546\" (UniqueName: \"kubernetes.io/projected/396048fa-2424-4d6c-80b6-61e9cce8a4ec-kube-api-access-rd546\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.780883 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15723c3d-2262-4670-a49d-b9f57c4f291f\") pod \"rabbitmq-server-0\" (UID: \"396048fa-2424-4d6c-80b6-61e9cce8a4ec\") " pod="openstack/rabbitmq-server-0" Mar 12 08:30:12 crc kubenswrapper[4809]: I0312 08:30:12.847534 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 08:30:13 crc kubenswrapper[4809]: I0312 08:30:13.129012 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d043d696-d09f-4c43-8960-0d31789103e8" path="/var/lib/kubelet/pods/d043d696-d09f-4c43-8960-0d31789103e8/volumes" Mar 12 08:30:13 crc kubenswrapper[4809]: I0312 08:30:13.381899 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 08:30:14 crc kubenswrapper[4809]: I0312 08:30:14.355773 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396048fa-2424-4d6c-80b6-61e9cce8a4ec","Type":"ContainerStarted","Data":"83936ce8e8a6ac49667c71e3e5ae4886e883dd42f34ac6e8a76d7d7541857c3f"} Mar 12 08:30:16 crc kubenswrapper[4809]: I0312 08:30:16.401373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396048fa-2424-4d6c-80b6-61e9cce8a4ec","Type":"ContainerStarted","Data":"b4a9352f104c6a38251ba4f980df49a32e8b0000ad172142c35d3fd721ff36d8"} Mar 12 
08:30:26 crc kubenswrapper[4809]: I0312 08:30:26.628382 4809 scope.go:117] "RemoveContainer" containerID="a5d33dc459ac717259e45f13ec4149298d61afa5b3792d19a16b0edda76ae037" Mar 12 08:30:26 crc kubenswrapper[4809]: I0312 08:30:26.657938 4809 scope.go:117] "RemoveContainer" containerID="7218fd99851780e805adc0b898f00f5c9ab9816475d6a72103dcc775a9395347" Mar 12 08:30:26 crc kubenswrapper[4809]: I0312 08:30:26.717380 4809 scope.go:117] "RemoveContainer" containerID="d245ce5e756182f20e26a6759b3800a839afa1db8cc82a4b20d0d7a6b179c10b" Mar 12 08:30:26 crc kubenswrapper[4809]: I0312 08:30:26.768162 4809 scope.go:117] "RemoveContainer" containerID="853e1d2444390fdd233ca4750bb5446e59760186012bfcafdad952e0f8516afa" Mar 12 08:30:26 crc kubenswrapper[4809]: I0312 08:30:26.821046 4809 scope.go:117] "RemoveContainer" containerID="6fe198a9e0493ad9e7ac15c32d55e3b52abb8fa1dcf93d05ded3ac3af3f2ae7c" Mar 12 08:30:26 crc kubenswrapper[4809]: I0312 08:30:26.884889 4809 scope.go:117] "RemoveContainer" containerID="e4d92fee75bd1ec69fd16528e38ef58e14eb355e4dd1b3a0540f3a72c00dc6e5" Mar 12 08:30:48 crc kubenswrapper[4809]: I0312 08:30:48.922903 4809 generic.go:334] "Generic (PLEG): container finished" podID="396048fa-2424-4d6c-80b6-61e9cce8a4ec" containerID="b4a9352f104c6a38251ba4f980df49a32e8b0000ad172142c35d3fd721ff36d8" exitCode=0 Mar 12 08:30:48 crc kubenswrapper[4809]: I0312 08:30:48.923005 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396048fa-2424-4d6c-80b6-61e9cce8a4ec","Type":"ContainerDied","Data":"b4a9352f104c6a38251ba4f980df49a32e8b0000ad172142c35d3fd721ff36d8"} Mar 12 08:30:49 crc kubenswrapper[4809]: I0312 08:30:49.949840 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"396048fa-2424-4d6c-80b6-61e9cce8a4ec","Type":"ContainerStarted","Data":"88496d885695dcff3aa5395b50a83ef666053c40a66360910829fcabf6f291d1"} Mar 12 08:30:49 crc kubenswrapper[4809]: I0312 08:30:49.950598 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 12 08:30:49 crc kubenswrapper[4809]: I0312 08:30:49.986378 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.986350896 podStartE2EDuration="37.986350896s" podCreationTimestamp="2026-03-12 08:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 08:30:49.977144806 +0000 UTC m=+1923.559180549" watchObservedRunningTime="2026-03-12 08:30:49.986350896 +0000 UTC m=+1923.568386639" Mar 12 08:31:02 crc kubenswrapper[4809]: I0312 08:31:02.853416 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 08:31:27 crc kubenswrapper[4809]: I0312 08:31:27.035640 4809 scope.go:117] "RemoveContainer" containerID="551c0ff174feb8d2b277e430ba590567b30d95f1d4cbe77b316da86a5bfb016a" Mar 12 08:31:27 crc kubenswrapper[4809]: I0312 08:31:27.067501 4809 scope.go:117] "RemoveContainer" containerID="c6791c423abdc7139b629d816a564bb56b9bb90e0cb7bfa1e33c388b4497120c" Mar 12 08:31:27 crc kubenswrapper[4809]: I0312 08:31:27.105247 4809 scope.go:117] "RemoveContainer" containerID="be3735f0f67c6917d2b921e4dc1e7667101646436f00d546509750763220c817" Mar 12 08:31:27 crc kubenswrapper[4809]: I0312 08:31:27.156886 4809 scope.go:117] "RemoveContainer" containerID="bc1696149afa3efa4c2c7614a3ab764ee68a721bf0538cd612938a8385627ba1" Mar 12 08:31:27 crc kubenswrapper[4809]: I0312 08:31:27.209964 4809 scope.go:117] "RemoveContainer" containerID="a874a15901146c96b8af8cf44e4ee0171cb7f2f1416fb9537e4d69d8bc61a78b" Mar 12 08:31:27 crc kubenswrapper[4809]: I0312 08:31:27.250788 4809 scope.go:117] "RemoveContainer" containerID="b832cb2acb463b61a2457f91dc10003a0197fa782dfaca3d5f4799194bdf73ce" Mar 12 08:31:45 crc kubenswrapper[4809]: I0312 08:31:45.048681 4809 patch_prober.go:28] 
interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:31:45 crc kubenswrapper[4809]: I0312 08:31:45.049503 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.160455 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555072-v4pfw"] Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.162682 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555072-v4pfw" Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.165689 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.165734 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.165905 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.187713 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555072-v4pfw"] Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.282128 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8ktd\" (UniqueName: 
\"kubernetes.io/projected/838ee61a-a827-4619-84e2-e9ecc34eae6e-kube-api-access-g8ktd\") pod \"auto-csr-approver-29555072-v4pfw\" (UID: \"838ee61a-a827-4619-84e2-e9ecc34eae6e\") " pod="openshift-infra/auto-csr-approver-29555072-v4pfw" Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.384807 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8ktd\" (UniqueName: \"kubernetes.io/projected/838ee61a-a827-4619-84e2-e9ecc34eae6e-kube-api-access-g8ktd\") pod \"auto-csr-approver-29555072-v4pfw\" (UID: \"838ee61a-a827-4619-84e2-e9ecc34eae6e\") " pod="openshift-infra/auto-csr-approver-29555072-v4pfw" Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.406772 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8ktd\" (UniqueName: \"kubernetes.io/projected/838ee61a-a827-4619-84e2-e9ecc34eae6e-kube-api-access-g8ktd\") pod \"auto-csr-approver-29555072-v4pfw\" (UID: \"838ee61a-a827-4619-84e2-e9ecc34eae6e\") " pod="openshift-infra/auto-csr-approver-29555072-v4pfw" Mar 12 08:32:00 crc kubenswrapper[4809]: I0312 08:32:00.495092 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555072-v4pfw" Mar 12 08:32:01 crc kubenswrapper[4809]: I0312 08:32:01.072238 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555072-v4pfw"] Mar 12 08:32:02 crc kubenswrapper[4809]: I0312 08:32:02.081658 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555072-v4pfw" event={"ID":"838ee61a-a827-4619-84e2-e9ecc34eae6e","Type":"ContainerStarted","Data":"89eebd7a640065913f7a1f29c958397c5b0d5b4d4ef83e2b64568bd6801cf49a"} Mar 12 08:32:03 crc kubenswrapper[4809]: I0312 08:32:03.058353 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b202-account-create-update-678rc"] Mar 12 08:32:03 crc kubenswrapper[4809]: I0312 08:32:03.086491 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b202-account-create-update-678rc"] Mar 12 08:32:03 crc kubenswrapper[4809]: I0312 08:32:03.101081 4809 generic.go:334] "Generic (PLEG): container finished" podID="838ee61a-a827-4619-84e2-e9ecc34eae6e" containerID="ccfa28e695c0acb1bd487d351c5faf25abe20e45248891b1ec8089b850e56db9" exitCode=0 Mar 12 08:32:03 crc kubenswrapper[4809]: I0312 08:32:03.101183 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555072-v4pfw" event={"ID":"838ee61a-a827-4619-84e2-e9ecc34eae6e","Type":"ContainerDied","Data":"ccfa28e695c0acb1bd487d351c5faf25abe20e45248891b1ec8089b850e56db9"} Mar 12 08:32:03 crc kubenswrapper[4809]: I0312 08:32:03.128536 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d1448b-4fcf-49ce-aab4-884a71885bb6" path="/var/lib/kubelet/pods/54d1448b-4fcf-49ce-aab4-884a71885bb6/volumes" Mar 12 08:32:04 crc kubenswrapper[4809]: I0312 08:32:04.587102 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555072-v4pfw"
Mar 12 08:32:04 crc kubenswrapper[4809]: I0312 08:32:04.717631 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8ktd\" (UniqueName: \"kubernetes.io/projected/838ee61a-a827-4619-84e2-e9ecc34eae6e-kube-api-access-g8ktd\") pod \"838ee61a-a827-4619-84e2-e9ecc34eae6e\" (UID: \"838ee61a-a827-4619-84e2-e9ecc34eae6e\") "
Mar 12 08:32:04 crc kubenswrapper[4809]: I0312 08:32:04.725402 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838ee61a-a827-4619-84e2-e9ecc34eae6e-kube-api-access-g8ktd" (OuterVolumeSpecName: "kube-api-access-g8ktd") pod "838ee61a-a827-4619-84e2-e9ecc34eae6e" (UID: "838ee61a-a827-4619-84e2-e9ecc34eae6e"). InnerVolumeSpecName "kube-api-access-g8ktd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:32:04 crc kubenswrapper[4809]: I0312 08:32:04.822898 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8ktd\" (UniqueName: \"kubernetes.io/projected/838ee61a-a827-4619-84e2-e9ecc34eae6e-kube-api-access-g8ktd\") on node \"crc\" DevicePath \"\""
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.054704 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-4855-account-create-update-m9b2t"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.072756 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7b33-account-create-update-zxwc5"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.087400 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-t975n"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.105130 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5m8tq"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.130201 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555072-v4pfw" event={"ID":"838ee61a-a827-4619-84e2-e9ecc34eae6e","Type":"ContainerDied","Data":"89eebd7a640065913f7a1f29c958397c5b0d5b4d4ef83e2b64568bd6801cf49a"}
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.130242 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89eebd7a640065913f7a1f29c958397c5b0d5b4d4ef83e2b64568bd6801cf49a"
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.130299 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555072-v4pfw"
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.132493 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7b33-account-create-update-zxwc5"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.147784 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-4855-account-create-update-m9b2t"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.161870 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-t975n"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.175808 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5m8tq"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.191169 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6hwhh"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.207249 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6hwhh"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.227964 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hkstm"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.245241 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hkstm"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.668351 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555066-dzhqs"]
Mar 12 08:32:05 crc kubenswrapper[4809]: I0312 08:32:05.681292 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555066-dzhqs"]
Mar 12 08:32:06 crc kubenswrapper[4809]: I0312 08:32:06.069355 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-56b8-account-create-update-2c67x"]
Mar 12 08:32:06 crc kubenswrapper[4809]: I0312 08:32:06.084044 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-56b8-account-create-update-2c67x"]
Mar 12 08:32:07 crc kubenswrapper[4809]: I0312 08:32:07.149299 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d092868-620f-4652-a74b-a3650282474a" path="/var/lib/kubelet/pods/0d092868-620f-4652-a74b-a3650282474a/volumes"
Mar 12 08:32:07 crc kubenswrapper[4809]: I0312 08:32:07.169609 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="117d7e22-88db-4dd9-a3d9-625dd1b577de" path="/var/lib/kubelet/pods/117d7e22-88db-4dd9-a3d9-625dd1b577de/volumes"
Mar 12 08:32:07 crc kubenswrapper[4809]: I0312 08:32:07.172855 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bea16ac-c763-4dea-891e-35af9814c6a8" path="/var/lib/kubelet/pods/5bea16ac-c763-4dea-891e-35af9814c6a8/volumes"
Mar 12 08:32:07 crc kubenswrapper[4809]: I0312 08:32:07.173811 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68799bd7-b150-4834-a9c3-e1d95cb2af7f" path="/var/lib/kubelet/pods/68799bd7-b150-4834-a9c3-e1d95cb2af7f/volumes"
Mar 12 08:32:07 crc kubenswrapper[4809]: I0312 08:32:07.174990 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765761f4-7998-4a19-8ff6-d72af224951c" path="/var/lib/kubelet/pods/765761f4-7998-4a19-8ff6-d72af224951c/volumes"
Mar 12 08:32:07 crc kubenswrapper[4809]: I0312 08:32:07.176025 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9faf40d-5722-4d62-a8b6-017e0ab167d2" path="/var/lib/kubelet/pods/a9faf40d-5722-4d62-a8b6-017e0ab167d2/volumes"
Mar 12 08:32:07 crc kubenswrapper[4809]: I0312 08:32:07.177500 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecd8d5a-ebc5-430d-b54d-f027d02c3def" path="/var/lib/kubelet/pods/aecd8d5a-ebc5-430d-b54d-f027d02c3def/volumes"
Mar 12 08:32:07 crc kubenswrapper[4809]: I0312 08:32:07.178231 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b849d3-1e9c-4122-ac6f-d9100a6187d2" path="/var/lib/kubelet/pods/b7b849d3-1e9c-4122-ac6f-d9100a6187d2/volumes"
Mar 12 08:32:11 crc kubenswrapper[4809]: I0312 08:32:11.047560 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d"]
Mar 12 08:32:11 crc kubenswrapper[4809]: I0312 08:32:11.062575 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-7hl7d"]
Mar 12 08:32:11 crc kubenswrapper[4809]: I0312 08:32:11.123897 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688a9778-473c-44fa-a4ba-ac00d4e21a10" path="/var/lib/kubelet/pods/688a9778-473c-44fa-a4ba-ac00d4e21a10/volumes"
Mar 12 08:32:13 crc kubenswrapper[4809]: I0312 08:32:13.048520 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-cffe-account-create-update-4vbvq"]
Mar 12 08:32:13 crc kubenswrapper[4809]: I0312 08:32:13.059988 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-cffe-account-create-update-4vbvq"]
Mar 12 08:32:13 crc kubenswrapper[4809]: I0312 08:32:13.120334 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff" path="/var/lib/kubelet/pods/3b3ee226-2b1e-4d6b-b0ac-6ede45a3cfff/volumes"
Mar 12 08:32:15 crc kubenswrapper[4809]: I0312 08:32:15.048509 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 08:32:15 crc kubenswrapper[4809]: I0312 08:32:15.048839 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 08:32:22 crc kubenswrapper[4809]: I0312 08:32:22.034681 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pnpgj"]
Mar 12 08:32:22 crc kubenswrapper[4809]: I0312 08:32:22.046638 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pnpgj"]
Mar 12 08:32:23 crc kubenswrapper[4809]: I0312 08:32:23.125001 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf" path="/var/lib/kubelet/pods/0874ad6a-f0b6-4b55-938a-aefe9dc8ebaf/volumes"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.392203 4809 scope.go:117] "RemoveContainer" containerID="f2e363d1208bee8705631da4519b442de6b732cdf38c31bebe25e81e761b9b5e"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.423211 4809 scope.go:117] "RemoveContainer" containerID="e403d5813e66777eeb6d2c3addd9cc362ddb235c4ceb4fb50681b2d66622ae2a"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.498396 4809 scope.go:117] "RemoveContainer" containerID="3c7772715689b58daeff59ad1214869c6a8a8640d2a4e0e821150f039eb73048"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.549660 4809 scope.go:117] "RemoveContainer" containerID="868f6f1d0d73f5ae768ae483c5a77830d1b0a6b89ca79c86657e30edd7ce45e7"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.599338 4809 scope.go:117] "RemoveContainer" containerID="380e4e9ac0328706b84c3eecd96967f7de7cf093daaeff21af047bcd4164aab5"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.659338 4809 scope.go:117] "RemoveContainer" containerID="a45fa71d0bd3b61b59c29d11ca018611976ed910a61665dda3cfc9044cce3425"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.726058 4809 scope.go:117] "RemoveContainer" containerID="0e0693879750442393e55afb37489772c8505c81eef03f3e4b5f44b06f753c89"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.764833 4809 scope.go:117] "RemoveContainer" containerID="3a438231b1583d34a00c11e0ebfe0b6d225dd328ae7ec0d849fe8cc27f1051d3"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.815282 4809 scope.go:117] "RemoveContainer" containerID="353e627a3c11215f87a5cb13740bac75c8510ed931ce3804dbf24ac9a081b6bd"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.847157 4809 scope.go:117] "RemoveContainer" containerID="21b811943b08988a3596b1ba906a57bf8740487822c38eae8cd2592fc3f1d129"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.872740 4809 scope.go:117] "RemoveContainer" containerID="43313c59113888de1212b1194c98774b04ff88c098f313032020d1022a08e568"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.898412 4809 scope.go:117] "RemoveContainer" containerID="b8b3a2f4d8a12ac11be4677768907f37b93be1114ae893e614e8d393fec2c0c2"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.924777 4809 scope.go:117] "RemoveContainer" containerID="1bbfca5d99887738ec2786a81ccecdb8000287233937d66c8101d1c47decc03b"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.953309 4809 scope.go:117] "RemoveContainer" containerID="d2af86247af42944b1743dd43f6d2ddb6af9d79c163cd7c46aa14102c984c0f2"
Mar 12 08:32:27 crc kubenswrapper[4809]: I0312 08:32:27.981176 4809 scope.go:117] "RemoveContainer" containerID="7ce3df5ec7639c225cedddcda84daf42856f9356dde19e7108d7b6ee7a257b70"
Mar 12 08:32:28 crc kubenswrapper[4809]: I0312 08:32:28.006494 4809 scope.go:117] "RemoveContainer" containerID="2ef48b99426e10d93866e3f8eefb58f3074dc49c341744abd1cb4c1eb0c39535"
Mar 12 08:32:35 crc kubenswrapper[4809]: I0312 08:32:35.084375 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qdxsg"]
Mar 12 08:32:35 crc kubenswrapper[4809]: I0312 08:32:35.103023 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qdxsg"]
Mar 12 08:32:35 crc kubenswrapper[4809]: I0312 08:32:35.119799 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ececaf-3560-4284-9d82-ac39de15bf88" path="/var/lib/kubelet/pods/a5ececaf-3560-4284-9d82-ac39de15bf88/volumes"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.029191 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4jjdb"]
Mar 12 08:32:37 crc kubenswrapper[4809]: E0312 08:32:37.030269 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838ee61a-a827-4619-84e2-e9ecc34eae6e" containerName="oc"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.030301 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="838ee61a-a827-4619-84e2-e9ecc34eae6e" containerName="oc"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.030691 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="838ee61a-a827-4619-84e2-e9ecc34eae6e" containerName="oc"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.033732 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.043157 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jjdb"]
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.065368 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-utilities\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.065488 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-catalog-content\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.065702 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gl2j\" (UniqueName: \"kubernetes.io/projected/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-kube-api-access-9gl2j\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.168329 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-utilities\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.168527 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-catalog-content\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.168749 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gl2j\" (UniqueName: \"kubernetes.io/projected/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-kube-api-access-9gl2j\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.170037 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-utilities\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.170478 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-catalog-content\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.195693 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gl2j\" (UniqueName: \"kubernetes.io/projected/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-kube-api-access-9gl2j\") pod \"community-operators-4jjdb\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.230551 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9d29n"]
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.233756 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.258568 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9d29n"]
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.384163 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkb9\" (UniqueName: \"kubernetes.io/projected/dc647467-6bca-4396-a76d-0e79ad3661bb-kube-api-access-4hkb9\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.384251 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-catalog-content\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.385508 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-utilities\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.394860 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jjdb"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.486819 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-utilities\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.486956 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hkb9\" (UniqueName: \"kubernetes.io/projected/dc647467-6bca-4396-a76d-0e79ad3661bb-kube-api-access-4hkb9\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.486988 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-catalog-content\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.487444 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-utilities\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.487631 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-catalog-content\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.528005 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hkb9\" (UniqueName: \"kubernetes.io/projected/dc647467-6bca-4396-a76d-0e79ad3661bb-kube-api-access-4hkb9\") pod \"certified-operators-9d29n\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.580869 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9d29n"
Mar 12 08:32:37 crc kubenswrapper[4809]: I0312 08:32:37.914039 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jjdb"]
Mar 12 08:32:38 crc kubenswrapper[4809]: I0312 08:32:38.246291 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9d29n"]
Mar 12 08:32:38 crc kubenswrapper[4809]: I0312 08:32:38.621480 4809 generic.go:334] "Generic (PLEG): container finished" podID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerID="2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6" exitCode=0
Mar 12 08:32:38 crc kubenswrapper[4809]: I0312 08:32:38.623886 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jjdb" event={"ID":"c2176c52-ac52-4571-b15a-f3d31ffd5fb5","Type":"ContainerDied","Data":"2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6"}
Mar 12 08:32:38 crc kubenswrapper[4809]: I0312 08:32:38.623925 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jjdb" event={"ID":"c2176c52-ac52-4571-b15a-f3d31ffd5fb5","Type":"ContainerStarted","Data":"c7225bf3e755031827b844503e17886f390a0e311ee6b25bf476ecd518703446"}
Mar 12 08:32:38 crc kubenswrapper[4809]: I0312 08:32:38.628571 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9d29n" event={"ID":"dc647467-6bca-4396-a76d-0e79ad3661bb","Type":"ContainerStarted","Data":"ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44"}
Mar 12 08:32:38 crc kubenswrapper[4809]: I0312 08:32:38.628987 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9d29n" event={"ID":"dc647467-6bca-4396-a76d-0e79ad3661bb","Type":"ContainerStarted","Data":"4c59f60020241b49a798cb8a718161f46f8dc6ddbbd393094019dad788873a1c"}
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.068506 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfd7-account-create-update-c8q5g"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.083584 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-756c-account-create-update-vv58x"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.101338 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e5ed-account-create-update-tgts4"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.136834 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3d72-account-create-update-vntwh"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.136872 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8rbct"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.152526 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bfd7-account-create-update-c8q5g"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.166018 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-drcgq"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.176075 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e5ed-account-create-update-tgts4"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.187530 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vfrtk"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.202988 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-756c-account-create-update-vv58x"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.219780 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vfrtk"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.237735 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3d72-account-create-update-vntwh"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.248906 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8rbct"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.266364 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-drcgq"]
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.651930 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jjdb" event={"ID":"c2176c52-ac52-4571-b15a-f3d31ffd5fb5","Type":"ContainerStarted","Data":"94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c"}
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.657856 4809 generic.go:334] "Generic (PLEG): container finished" podID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerID="ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44" exitCode=0
Mar 12 08:32:39 crc kubenswrapper[4809]: I0312 08:32:39.657960 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9d29n" event={"ID":"dc647467-6bca-4396-a76d-0e79ad3661bb","Type":"ContainerDied","Data":"ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44"}
Mar 12 08:32:40 crc kubenswrapper[4809]: I0312 08:32:40.672044 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9d29n" event={"ID":"dc647467-6bca-4396-a76d-0e79ad3661bb","Type":"ContainerStarted","Data":"eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52"}
Mar 12 08:32:41 crc kubenswrapper[4809]: I0312 08:32:41.123268 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193b129f-b891-4890-88b0-bfcc2799127b" path="/var/lib/kubelet/pods/193b129f-b891-4890-88b0-bfcc2799127b/volumes"
Mar 12 08:32:41 crc kubenswrapper[4809]: I0312 08:32:41.124917 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ccdcb5e-d68e-4046-9f86-3d37634c9cf3" path="/var/lib/kubelet/pods/1ccdcb5e-d68e-4046-9f86-3d37634c9cf3/volumes"
Mar 12 08:32:41 crc kubenswrapper[4809]: I0312 08:32:41.126202 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d" path="/var/lib/kubelet/pods/1fba0a11-b0dd-48b1-9dff-a9e30daf4a3d/volumes"
Mar 12 08:32:41 crc kubenswrapper[4809]: I0312 08:32:41.128196 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d780d8-832e-43d9-81f7-8047de4d9076" path="/var/lib/kubelet/pods/31d780d8-832e-43d9-81f7-8047de4d9076/volumes"
Mar 12 08:32:41 crc kubenswrapper[4809]: I0312 08:32:41.129967 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656bf0f9-30c6-4f99-acd3-39996a0fa0b4" path="/var/lib/kubelet/pods/656bf0f9-30c6-4f99-acd3-39996a0fa0b4/volumes"
Mar 12 08:32:41 crc kubenswrapper[4809]: I0312 08:32:41.130666 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11d110e-e009-481e-a5f6-1a380f66764c" path="/var/lib/kubelet/pods/d11d110e-e009-481e-a5f6-1a380f66764c/volumes"
Mar 12 08:32:41 crc kubenswrapper[4809]: I0312 08:32:41.131429 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d834299e-a8cc-4c17-9b41-8e00d9fa2929" path="/var/lib/kubelet/pods/d834299e-a8cc-4c17-9b41-8e00d9fa2929/volumes"
Mar 12 08:32:42 crc kubenswrapper[4809]: I0312 08:32:42.699461 4809 generic.go:334] "Generic (PLEG): container finished" podID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerID="94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c" exitCode=0
Mar 12 08:32:42 crc kubenswrapper[4809]: I0312 08:32:42.699663 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jjdb" event={"ID":"c2176c52-ac52-4571-b15a-f3d31ffd5fb5","Type":"ContainerDied","Data":"94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c"}
Mar 12 08:32:42 crc kubenswrapper[4809]: I0312 08:32:42.703823 4809 generic.go:334] "Generic (PLEG): container finished" podID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerID="eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52" exitCode=0
Mar 12 08:32:42 crc kubenswrapper[4809]: I0312 08:32:42.704243 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9d29n" event={"ID":"dc647467-6bca-4396-a76d-0e79ad3661bb","Type":"ContainerDied","Data":"eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52"}
Mar 12 08:32:43 crc kubenswrapper[4809]: I0312 08:32:43.720166 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jjdb" event={"ID":"c2176c52-ac52-4571-b15a-f3d31ffd5fb5","Type":"ContainerStarted","Data":"992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4"}
Mar 12 08:32:43 crc kubenswrapper[4809]: I0312 08:32:43.724552 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9d29n" event={"ID":"dc647467-6bca-4396-a76d-0e79ad3661bb","Type":"ContainerStarted","Data":"22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf"}
Mar 12 08:32:43 crc kubenswrapper[4809]: I0312 08:32:43.789593 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4jjdb" podStartSLOduration=3.294167768 podStartE2EDuration="7.78955238s" podCreationTimestamp="2026-03-12 08:32:36 +0000 UTC" firstStartedPulling="2026-03-12 08:32:38.624216924 +0000 UTC m=+2032.206252657" lastFinishedPulling="2026-03-12 08:32:43.119601506 +0000 UTC m=+2036.701637269" observedRunningTime="2026-03-12 08:32:43.75577488 +0000 UTC m=+2037.337810623" watchObservedRunningTime="2026-03-12 08:32:43.78955238 +0000 UTC m=+2037.371588143"
Mar 12 08:32:43 crc kubenswrapper[4809]: I0312 08:32:43.806473 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9d29n" podStartSLOduration=2.354566513 podStartE2EDuration="6.80645272s" podCreationTimestamp="2026-03-12 08:32:37 +0000 UTC" firstStartedPulling="2026-03-12 08:32:38.631255506 +0000 UTC m=+2032.213291239" lastFinishedPulling="2026-03-12 08:32:43.083141723 +0000 UTC m=+2036.665177446" observedRunningTime="2026-03-12 08:32:43.782273941 +0000 UTC m=+2037.364309684" watchObservedRunningTime="2026-03-12 08:32:43.80645272 +0000 UTC m=+2037.388488453"
Mar 12 08:32:44 crc kubenswrapper[4809]: I0312 08:32:44.735465 4809 generic.go:334] "Generic (PLEG): container finished" podID="23801b93-a5b2-44dc-b04a-bc3c50fccbfd" containerID="52b27754ae461836ffec296cab3737ca6b37e8234634f62cda6771b838a2e5b0" exitCode=0
Mar 12 08:32:44 crc kubenswrapper[4809]: I0312 08:32:44.735577 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" event={"ID":"23801b93-a5b2-44dc-b04a-bc3c50fccbfd","Type":"ContainerDied","Data":"52b27754ae461836ffec296cab3737ca6b37e8234634f62cda6771b838a2e5b0"}
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.033650 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-85jsk"]
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.044632 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-85jsk"]
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.048086 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.048160 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.048208 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c"
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.049174 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97289a619c1926063f6f049c601dbe841c3463284a0fd671e4120b278e3af3bb"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.049226 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://97289a619c1926063f6f049c601dbe841c3463284a0fd671e4120b278e3af3bb" gracePeriod=600
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.119354 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1" path="/var/lib/kubelet/pods/2c55ab9f-27fc-410f-a8bf-fb2a34d4bec1/volumes"
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.756616 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="97289a619c1926063f6f049c601dbe841c3463284a0fd671e4120b278e3af3bb" exitCode=0
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.756650 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"97289a619c1926063f6f049c601dbe841c3463284a0fd671e4120b278e3af3bb"}
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.756963 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0"}
Mar 12 08:32:45 crc kubenswrapper[4809]: I0312 08:32:45.756986 4809 scope.go:117] "RemoveContainer" containerID="864ab2b9321734db2ec1c2e4ce57f8a096a813a5cbaa0744b231535713d6ab10"
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.341993 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp"
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.465661 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-ssh-key-openstack-edpm-ipam\") pod \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") "
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.466658 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdxx8\" (UniqueName: \"kubernetes.io/projected/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-kube-api-access-zdxx8\") pod \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") "
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.466839 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-bootstrap-combined-ca-bundle\") pod \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") "
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.467011 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-inventory\") pod \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\" (UID: \"23801b93-a5b2-44dc-b04a-bc3c50fccbfd\") "
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.482312 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "23801b93-a5b2-44dc-b04a-bc3c50fccbfd" (UID: "23801b93-a5b2-44dc-b04a-bc3c50fccbfd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.483493 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-kube-api-access-zdxx8" (OuterVolumeSpecName: "kube-api-access-zdxx8") pod "23801b93-a5b2-44dc-b04a-bc3c50fccbfd" (UID: "23801b93-a5b2-44dc-b04a-bc3c50fccbfd"). InnerVolumeSpecName "kube-api-access-zdxx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.512213 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23801b93-a5b2-44dc-b04a-bc3c50fccbfd" (UID: "23801b93-a5b2-44dc-b04a-bc3c50fccbfd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.565327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-inventory" (OuterVolumeSpecName: "inventory") pod "23801b93-a5b2-44dc-b04a-bc3c50fccbfd" (UID: "23801b93-a5b2-44dc-b04a-bc3c50fccbfd"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.569868 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdxx8\" (UniqueName: \"kubernetes.io/projected/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-kube-api-access-zdxx8\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.569918 4809 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.569928 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.569941 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23801b93-a5b2-44dc-b04a-bc3c50fccbfd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.771253 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" event={"ID":"23801b93-a5b2-44dc-b04a-bc3c50fccbfd","Type":"ContainerDied","Data":"5d72a12691ed263533cfd4de1b70f5a98c640ad9b3ce0d509af3a3d91cb6638e"} Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.771300 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d72a12691ed263533cfd4de1b70f5a98c640ad9b3ce0d509af3a3d91cb6638e" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.771364 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.858009 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx"] Mar 12 08:32:46 crc kubenswrapper[4809]: E0312 08:32:46.858554 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23801b93-a5b2-44dc-b04a-bc3c50fccbfd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.858574 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="23801b93-a5b2-44dc-b04a-bc3c50fccbfd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.858801 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="23801b93-a5b2-44dc-b04a-bc3c50fccbfd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.859619 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.862437 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.866382 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.868457 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.868841 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.905773 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx"] Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.988208 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:46 crc kubenswrapper[4809]: I0312 08:32:46.988341 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9jq\" (UniqueName: \"kubernetes.io/projected/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-kube-api-access-zt9jq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:46 crc 
kubenswrapper[4809]: I0312 08:32:46.988539 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.090671 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9jq\" (UniqueName: \"kubernetes.io/projected/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-kube-api-access-zt9jq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.090763 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.090924 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.092778 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:32:47 crc 
kubenswrapper[4809]: I0312 08:32:47.093417 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.112018 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.115083 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9jq\" (UniqueName: \"kubernetes.io/projected/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-kube-api-access-zt9jq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.117283 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.187371 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.191568 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.395482 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4jjdb" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.396500 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4jjdb" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.477232 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4jjdb" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.582422 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9d29n" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.582571 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9d29n" Mar 12 08:32:47 crc kubenswrapper[4809]: I0312 08:32:47.900270 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx"] Mar 12 08:32:48 crc kubenswrapper[4809]: I0312 08:32:48.425073 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:32:48 crc kubenswrapper[4809]: I0312 08:32:48.634709 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9d29n" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="registry-server" probeResult="failure" output=< Mar 12 08:32:48 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:32:48 crc kubenswrapper[4809]: > Mar 12 08:32:48 crc kubenswrapper[4809]: I0312 08:32:48.824732 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" event={"ID":"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59","Type":"ContainerStarted","Data":"40689e8ccbe2c048648c6e4b887c1bab8fa1e6913b8840f7e5753a65c9b5bd52"} Mar 12 08:32:48 crc kubenswrapper[4809]: I0312 08:32:48.892565 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4jjdb" Mar 12 08:32:48 crc kubenswrapper[4809]: I0312 08:32:48.961244 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4jjdb"] Mar 12 08:32:49 crc kubenswrapper[4809]: I0312 08:32:49.843303 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" event={"ID":"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59","Type":"ContainerStarted","Data":"de3ffe85b648c0cdd06d4ba951f6c72523baf5dd09433f1067e9928c088e26a9"} Mar 12 08:32:49 crc kubenswrapper[4809]: I0312 08:32:49.867462 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" podStartSLOduration=3.350676719 podStartE2EDuration="3.867443279s" podCreationTimestamp="2026-03-12 08:32:46 +0000 UTC" firstStartedPulling="2026-03-12 08:32:47.905089142 +0000 UTC m=+2041.487124875" lastFinishedPulling="2026-03-12 08:32:48.421855702 +0000 UTC m=+2042.003891435" observedRunningTime="2026-03-12 08:32:49.865543597 +0000 UTC m=+2043.447579330" watchObservedRunningTime="2026-03-12 08:32:49.867443279 +0000 UTC m=+2043.449479012" Mar 12 08:32:50 crc kubenswrapper[4809]: I0312 08:32:50.854268 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4jjdb" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerName="registry-server" containerID="cri-o://992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4" gracePeriod=2 Mar 12 08:32:51 crc kubenswrapper[4809]: 
I0312 08:32:51.407131 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jjdb" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.551985 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gl2j\" (UniqueName: \"kubernetes.io/projected/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-kube-api-access-9gl2j\") pod \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.552141 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-utilities\") pod \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.552520 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-catalog-content\") pod \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\" (UID: \"c2176c52-ac52-4571-b15a-f3d31ffd5fb5\") " Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.553188 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-utilities" (OuterVolumeSpecName: "utilities") pod "c2176c52-ac52-4571-b15a-f3d31ffd5fb5" (UID: "c2176c52-ac52-4571-b15a-f3d31ffd5fb5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.553990 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.560608 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-kube-api-access-9gl2j" (OuterVolumeSpecName: "kube-api-access-9gl2j") pod "c2176c52-ac52-4571-b15a-f3d31ffd5fb5" (UID: "c2176c52-ac52-4571-b15a-f3d31ffd5fb5"). InnerVolumeSpecName "kube-api-access-9gl2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.606419 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2176c52-ac52-4571-b15a-f3d31ffd5fb5" (UID: "c2176c52-ac52-4571-b15a-f3d31ffd5fb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.657425 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gl2j\" (UniqueName: \"kubernetes.io/projected/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-kube-api-access-9gl2j\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.657482 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2176c52-ac52-4571-b15a-f3d31ffd5fb5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.870365 4809 generic.go:334] "Generic (PLEG): container finished" podID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerID="992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4" exitCode=0 Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.870457 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jjdb" event={"ID":"c2176c52-ac52-4571-b15a-f3d31ffd5fb5","Type":"ContainerDied","Data":"992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4"} Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.870479 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4jjdb" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.870521 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jjdb" event={"ID":"c2176c52-ac52-4571-b15a-f3d31ffd5fb5","Type":"ContainerDied","Data":"c7225bf3e755031827b844503e17886f390a0e311ee6b25bf476ecd518703446"} Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.870543 4809 scope.go:117] "RemoveContainer" containerID="992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.915024 4809 scope.go:117] "RemoveContainer" containerID="94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c" Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.926964 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4jjdb"] Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.944569 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4jjdb"] Mar 12 08:32:51 crc kubenswrapper[4809]: I0312 08:32:51.960075 4809 scope.go:117] "RemoveContainer" containerID="2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6" Mar 12 08:32:52 crc kubenswrapper[4809]: I0312 08:32:52.002093 4809 scope.go:117] "RemoveContainer" containerID="992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4" Mar 12 08:32:52 crc kubenswrapper[4809]: E0312 08:32:52.002817 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4\": container with ID starting with 992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4 not found: ID does not exist" containerID="992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4" Mar 12 08:32:52 crc kubenswrapper[4809]: I0312 08:32:52.002886 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4"} err="failed to get container status \"992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4\": rpc error: code = NotFound desc = could not find container \"992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4\": container with ID starting with 992d84781f339f01150db3f84ae94053ac68034f06fe89bc0b6a431a8156f9c4 not found: ID does not exist" Mar 12 08:32:52 crc kubenswrapper[4809]: I0312 08:32:52.002933 4809 scope.go:117] "RemoveContainer" containerID="94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c" Mar 12 08:32:52 crc kubenswrapper[4809]: E0312 08:32:52.003544 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c\": container with ID starting with 94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c not found: ID does not exist" containerID="94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c" Mar 12 08:32:52 crc kubenswrapper[4809]: I0312 08:32:52.003614 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c"} err="failed to get container status \"94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c\": rpc error: code = NotFound desc = could not find container \"94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c\": container with ID starting with 94d35644f271d387e41e1f29825ec5e1a98f88151ff7b5ef67e3223ca76e486c not found: ID does not exist" Mar 12 08:32:52 crc kubenswrapper[4809]: I0312 08:32:52.003647 4809 scope.go:117] "RemoveContainer" containerID="2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6" Mar 12 08:32:52 crc kubenswrapper[4809]: E0312 
08:32:52.004079 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6\": container with ID starting with 2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6 not found: ID does not exist" containerID="2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6" Mar 12 08:32:52 crc kubenswrapper[4809]: I0312 08:32:52.004173 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6"} err="failed to get container status \"2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6\": rpc error: code = NotFound desc = could not find container \"2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6\": container with ID starting with 2fe5682c6514038f7fa66d2c687c4d9de5aeaa680df7492af11649aceeb8fbb6 not found: ID does not exist" Mar 12 08:32:53 crc kubenswrapper[4809]: I0312 08:32:53.128309 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" path="/var/lib/kubelet/pods/c2176c52-ac52-4571-b15a-f3d31ffd5fb5/volumes" Mar 12 08:32:57 crc kubenswrapper[4809]: I0312 08:32:57.643514 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9d29n" Mar 12 08:32:57 crc kubenswrapper[4809]: I0312 08:32:57.714532 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9d29n" Mar 12 08:32:57 crc kubenswrapper[4809]: I0312 08:32:57.886706 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9d29n"] Mar 12 08:32:58 crc kubenswrapper[4809]: I0312 08:32:58.963169 4809 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-9d29n" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="registry-server" containerID="cri-o://22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf" gracePeriod=2 Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.502129 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9d29n" Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.612895 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-catalog-content\") pod \"dc647467-6bca-4396-a76d-0e79ad3661bb\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.613499 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hkb9\" (UniqueName: \"kubernetes.io/projected/dc647467-6bca-4396-a76d-0e79ad3661bb-kube-api-access-4hkb9\") pod \"dc647467-6bca-4396-a76d-0e79ad3661bb\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.613754 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-utilities\") pod \"dc647467-6bca-4396-a76d-0e79ad3661bb\" (UID: \"dc647467-6bca-4396-a76d-0e79ad3661bb\") " Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.615802 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-utilities" (OuterVolumeSpecName: "utilities") pod "dc647467-6bca-4396-a76d-0e79ad3661bb" (UID: "dc647467-6bca-4396-a76d-0e79ad3661bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.629626 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc647467-6bca-4396-a76d-0e79ad3661bb-kube-api-access-4hkb9" (OuterVolumeSpecName: "kube-api-access-4hkb9") pod "dc647467-6bca-4396-a76d-0e79ad3661bb" (UID: "dc647467-6bca-4396-a76d-0e79ad3661bb"). InnerVolumeSpecName "kube-api-access-4hkb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.669132 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc647467-6bca-4396-a76d-0e79ad3661bb" (UID: "dc647467-6bca-4396-a76d-0e79ad3661bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.717101 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.717163 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc647467-6bca-4396-a76d-0e79ad3661bb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.717180 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hkb9\" (UniqueName: \"kubernetes.io/projected/dc647467-6bca-4396-a76d-0e79ad3661bb-kube-api-access-4hkb9\") on node \"crc\" DevicePath \"\"" Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.987274 4809 generic.go:334] "Generic (PLEG): container finished" podID="dc647467-6bca-4396-a76d-0e79ad3661bb" 
containerID="22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf" exitCode=0 Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.987334 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9d29n" event={"ID":"dc647467-6bca-4396-a76d-0e79ad3661bb","Type":"ContainerDied","Data":"22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf"} Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.987368 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9d29n" event={"ID":"dc647467-6bca-4396-a76d-0e79ad3661bb","Type":"ContainerDied","Data":"4c59f60020241b49a798cb8a718161f46f8dc6ddbbd393094019dad788873a1c"} Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.987388 4809 scope.go:117] "RemoveContainer" containerID="22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf" Mar 12 08:32:59 crc kubenswrapper[4809]: I0312 08:32:59.987545 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9d29n" Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.032103 4809 scope.go:117] "RemoveContainer" containerID="eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52" Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.052211 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9d29n"] Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.062868 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9d29n"] Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.070634 4809 scope.go:117] "RemoveContainer" containerID="ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44" Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.146336 4809 scope.go:117] "RemoveContainer" containerID="22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf" Mar 12 08:33:00 crc kubenswrapper[4809]: E0312 08:33:00.147020 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf\": container with ID starting with 22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf not found: ID does not exist" containerID="22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf" Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.147096 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf"} err="failed to get container status \"22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf\": rpc error: code = NotFound desc = could not find container \"22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf\": container with ID starting with 22c3ecd9ca9cec7bea4b7d4a04838f55e4b43fb05f3330879d38ff8dd9a435bf not 
found: ID does not exist" Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.147154 4809 scope.go:117] "RemoveContainer" containerID="eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52" Mar 12 08:33:00 crc kubenswrapper[4809]: E0312 08:33:00.147806 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52\": container with ID starting with eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52 not found: ID does not exist" containerID="eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52" Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.147899 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52"} err="failed to get container status \"eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52\": rpc error: code = NotFound desc = could not find container \"eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52\": container with ID starting with eac94858e9387c62fa18135375cb71fbe0ea2123b4874d640477de71a5a9ce52 not found: ID does not exist" Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.147931 4809 scope.go:117] "RemoveContainer" containerID="ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44" Mar 12 08:33:00 crc kubenswrapper[4809]: E0312 08:33:00.148395 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44\": container with ID starting with ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44 not found: ID does not exist" containerID="ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44" Mar 12 08:33:00 crc kubenswrapper[4809]: I0312 08:33:00.148418 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44"} err="failed to get container status \"ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44\": rpc error: code = NotFound desc = could not find container \"ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44\": container with ID starting with ed2ceec8ed5ab001f8eaf5b99debfec9a5da5bbff55f38fdabeafccdfefd8d44 not found: ID does not exist" Mar 12 08:33:01 crc kubenswrapper[4809]: I0312 08:33:01.127837 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" path="/var/lib/kubelet/pods/dc647467-6bca-4396-a76d-0e79ad3661bb/volumes" Mar 12 08:33:08 crc kubenswrapper[4809]: I0312 08:33:08.075560 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6pfp4"] Mar 12 08:33:08 crc kubenswrapper[4809]: I0312 08:33:08.092913 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6pfp4"] Mar 12 08:33:09 crc kubenswrapper[4809]: I0312 08:33:09.134436 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb484e1-36e8-4bfe-aeb2-72fc1c331cda" path="/var/lib/kubelet/pods/cdb484e1-36e8-4bfe-aeb2-72fc1c331cda/volumes" Mar 12 08:33:24 crc kubenswrapper[4809]: I0312 08:33:24.043902 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-nzpz5"] Mar 12 08:33:24 crc kubenswrapper[4809]: I0312 08:33:24.060927 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-nzpz5"] Mar 12 08:33:24 crc kubenswrapper[4809]: I0312 08:33:24.949262 4809 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-dppmm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 08:33:24 crc kubenswrapper[4809]: I0312 08:33:24.949682 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" podUID="1189f657-b031-4ece-859b-95d3eadd8221" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 08:33:25 crc kubenswrapper[4809]: I0312 08:33:25.130216 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3108f49-e70c-4650-a969-b83a1ed46a14" path="/var/lib/kubelet/pods/d3108f49-e70c-4650-a969-b83a1ed46a14/volumes" Mar 12 08:33:25 crc kubenswrapper[4809]: I0312 08:33:25.168271 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 08:33:25 crc kubenswrapper[4809]: I0312 08:33:25.168305 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 08:33:25 crc kubenswrapper[4809]: I0312 08:33:25.168355 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:33:25 crc kubenswrapper[4809]: I0312 08:33:25.168515 4809 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 08:33:28 crc kubenswrapper[4809]: I0312 08:33:28.431807 4809 scope.go:117] "RemoveContainer" containerID="b1cdfec6c07a1cb3b52e6148237b1b1cffebcdeb51474dc302a93b1b39f0ba4c" Mar 12 08:33:28 crc kubenswrapper[4809]: I0312 08:33:28.498464 4809 scope.go:117] "RemoveContainer" containerID="4941303b1e8bcbb1797326a4cfdbad9d29bb0afcff0c88fd24ebb36eb9690275" Mar 12 08:33:28 crc kubenswrapper[4809]: I0312 08:33:28.973526 4809 scope.go:117] "RemoveContainer" containerID="114022f72c6823d90e88665866d16f0dfca63980b30479a6b187a9af05e9d09c" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.098466 4809 scope.go:117] "RemoveContainer" containerID="778210d7bee4521eefed935f786032eeadb5eaa91c6aaa08f29d1f0733bd4e8c" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.191781 4809 scope.go:117] "RemoveContainer" containerID="4ee61acb61b7383a33753520c01f0f3ea94393b21b79270375c526414b7d0d57" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.242597 4809 scope.go:117] "RemoveContainer" containerID="0ae94b47a680a52a07d23639fe4c7bc0466feb5b16b9b932a0f45b2a9739e885" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.328736 4809 scope.go:117] "RemoveContainer" containerID="6708c0691d84e5151f002f9e2d9eb1612e0bc72aaf3d3923ae49e2bf6f2f414d" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.381390 4809 scope.go:117] "RemoveContainer" containerID="00dfeb858ed8f349814317a06f81c52920176c3f4d3f5cba44c6e61078c86a9b" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.408411 4809 scope.go:117] "RemoveContainer" containerID="bc0ab4d7344b148c072b826b583cf9737c2d8d5957b4d242227b170c4d9a76dc" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.440622 4809 scope.go:117] 
"RemoveContainer" containerID="9ce6278a38d95a21cb0eee8368fa80075e844b43c4cd9c1c74615d57a007e847" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.486543 4809 scope.go:117] "RemoveContainer" containerID="7c5fa709bfaea8ecec4a762140e3d2c31d7eb5c3cb7982be000b977a40205591" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.551776 4809 scope.go:117] "RemoveContainer" containerID="cb9c0650431f2f1b74e7b39bc5735f6bbd388833594ca9be2ee0e9ffcc18175d" Mar 12 08:33:29 crc kubenswrapper[4809]: I0312 08:33:29.620318 4809 scope.go:117] "RemoveContainer" containerID="b70e9b0f42a41567f7d3edfc8bd25ec04b7f4ae8f4c5ae879608b77c71628bd1" Mar 12 08:33:30 crc kubenswrapper[4809]: I0312 08:33:30.053006 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-98nmv"] Mar 12 08:33:30 crc kubenswrapper[4809]: I0312 08:33:30.065223 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-98nmv"] Mar 12 08:33:31 crc kubenswrapper[4809]: I0312 08:33:31.122968 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0857990f-7921-4ea0-a0c1-e431cc7de107" path="/var/lib/kubelet/pods/0857990f-7921-4ea0-a0c1-e431cc7de107/volumes" Mar 12 08:33:38 crc kubenswrapper[4809]: I0312 08:33:38.075701 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jssh7"] Mar 12 08:33:38 crc kubenswrapper[4809]: I0312 08:33:38.133548 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jssh7"] Mar 12 08:33:38 crc kubenswrapper[4809]: I0312 08:33:38.150513 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jf5zs"] Mar 12 08:33:38 crc kubenswrapper[4809]: I0312 08:33:38.166969 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jf5zs"] Mar 12 08:33:39 crc kubenswrapper[4809]: I0312 08:33:39.137989 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="23f0c997-5c6b-4229-911e-efe7b42f59f7" path="/var/lib/kubelet/pods/23f0c997-5c6b-4229-911e-efe7b42f59f7/volumes" Mar 12 08:33:39 crc kubenswrapper[4809]: I0312 08:33:39.139509 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760b497c-568e-4501-86c1-1c9f6b5e7f7d" path="/var/lib/kubelet/pods/760b497c-568e-4501-86c1-1c9f6b5e7f7d/volumes" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.149332 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555074-7rpg6"] Mar 12 08:34:00 crc kubenswrapper[4809]: E0312 08:34:00.151326 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerName="registry-server" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.151352 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerName="registry-server" Mar 12 08:34:00 crc kubenswrapper[4809]: E0312 08:34:00.151387 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="extract-content" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.151400 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="extract-content" Mar 12 08:34:00 crc kubenswrapper[4809]: E0312 08:34:00.151438 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="registry-server" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.151451 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="registry-server" Mar 12 08:34:00 crc kubenswrapper[4809]: E0312 08:34:00.151497 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="extract-utilities" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 
08:34:00.151510 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="extract-utilities" Mar 12 08:34:00 crc kubenswrapper[4809]: E0312 08:34:00.151548 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerName="extract-utilities" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.151560 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerName="extract-utilities" Mar 12 08:34:00 crc kubenswrapper[4809]: E0312 08:34:00.151579 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerName="extract-content" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.151597 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerName="extract-content" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.152000 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2176c52-ac52-4571-b15a-f3d31ffd5fb5" containerName="registry-server" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.152045 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc647467-6bca-4396-a76d-0e79ad3661bb" containerName="registry-server" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.153551 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.158245 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555074-7rpg6"] Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.202297 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.202421 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.202643 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.312290 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fjt\" (UniqueName: \"kubernetes.io/projected/0648e299-c5fa-40d2-8997-bb9189dea019-kube-api-access-x7fjt\") pod \"auto-csr-approver-29555074-7rpg6\" (UID: \"0648e299-c5fa-40d2-8997-bb9189dea019\") " pod="openshift-infra/auto-csr-approver-29555074-7rpg6" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.416640 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7fjt\" (UniqueName: \"kubernetes.io/projected/0648e299-c5fa-40d2-8997-bb9189dea019-kube-api-access-x7fjt\") pod \"auto-csr-approver-29555074-7rpg6\" (UID: \"0648e299-c5fa-40d2-8997-bb9189dea019\") " pod="openshift-infra/auto-csr-approver-29555074-7rpg6" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.441234 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7fjt\" (UniqueName: \"kubernetes.io/projected/0648e299-c5fa-40d2-8997-bb9189dea019-kube-api-access-x7fjt\") pod \"auto-csr-approver-29555074-7rpg6\" (UID: \"0648e299-c5fa-40d2-8997-bb9189dea019\") " 
pod="openshift-infra/auto-csr-approver-29555074-7rpg6" Mar 12 08:34:00 crc kubenswrapper[4809]: I0312 08:34:00.520501 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" Mar 12 08:34:01 crc kubenswrapper[4809]: I0312 08:34:01.064598 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555074-7rpg6"] Mar 12 08:34:01 crc kubenswrapper[4809]: W0312 08:34:01.069270 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0648e299_c5fa_40d2_8997_bb9189dea019.slice/crio-61e67dc51c2e392821488bf4032e39116a33805245dc9ddb99e6c7d3a171ce22 WatchSource:0}: Error finding container 61e67dc51c2e392821488bf4032e39116a33805245dc9ddb99e6c7d3a171ce22: Status 404 returned error can't find the container with id 61e67dc51c2e392821488bf4032e39116a33805245dc9ddb99e6c7d3a171ce22 Mar 12 08:34:01 crc kubenswrapper[4809]: I0312 08:34:01.072657 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:34:01 crc kubenswrapper[4809]: I0312 08:34:01.939468 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" event={"ID":"0648e299-c5fa-40d2-8997-bb9189dea019","Type":"ContainerStarted","Data":"61e67dc51c2e392821488bf4032e39116a33805245dc9ddb99e6c7d3a171ce22"} Mar 12 08:34:02 crc kubenswrapper[4809]: I0312 08:34:02.966427 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" event={"ID":"0648e299-c5fa-40d2-8997-bb9189dea019","Type":"ContainerStarted","Data":"c54dc4feb9d455802d45ead893fd38dd6c38537551cfdc102ccb423bfe6a80cc"} Mar 12 08:34:03 crc kubenswrapper[4809]: I0312 08:34:03.051186 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" 
podStartSLOduration=1.9203759470000001 podStartE2EDuration="3.051164527s" podCreationTimestamp="2026-03-12 08:34:00 +0000 UTC" firstStartedPulling="2026-03-12 08:34:01.072409443 +0000 UTC m=+2114.654445176" lastFinishedPulling="2026-03-12 08:34:02.203198023 +0000 UTC m=+2115.785233756" observedRunningTime="2026-03-12 08:34:03.041029001 +0000 UTC m=+2116.623064734" watchObservedRunningTime="2026-03-12 08:34:03.051164527 +0000 UTC m=+2116.633200260" Mar 12 08:34:03 crc kubenswrapper[4809]: I0312 08:34:03.981937 4809 generic.go:334] "Generic (PLEG): container finished" podID="0648e299-c5fa-40d2-8997-bb9189dea019" containerID="c54dc4feb9d455802d45ead893fd38dd6c38537551cfdc102ccb423bfe6a80cc" exitCode=0 Mar 12 08:34:03 crc kubenswrapper[4809]: I0312 08:34:03.982017 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" event={"ID":"0648e299-c5fa-40d2-8997-bb9189dea019","Type":"ContainerDied","Data":"c54dc4feb9d455802d45ead893fd38dd6c38537551cfdc102ccb423bfe6a80cc"} Mar 12 08:34:05 crc kubenswrapper[4809]: I0312 08:34:05.438060 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" Mar 12 08:34:05 crc kubenswrapper[4809]: I0312 08:34:05.582966 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7fjt\" (UniqueName: \"kubernetes.io/projected/0648e299-c5fa-40d2-8997-bb9189dea019-kube-api-access-x7fjt\") pod \"0648e299-c5fa-40d2-8997-bb9189dea019\" (UID: \"0648e299-c5fa-40d2-8997-bb9189dea019\") " Mar 12 08:34:05 crc kubenswrapper[4809]: I0312 08:34:05.590086 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0648e299-c5fa-40d2-8997-bb9189dea019-kube-api-access-x7fjt" (OuterVolumeSpecName: "kube-api-access-x7fjt") pod "0648e299-c5fa-40d2-8997-bb9189dea019" (UID: "0648e299-c5fa-40d2-8997-bb9189dea019"). 
InnerVolumeSpecName "kube-api-access-x7fjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:34:05 crc kubenswrapper[4809]: I0312 08:34:05.686690 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7fjt\" (UniqueName: \"kubernetes.io/projected/0648e299-c5fa-40d2-8997-bb9189dea019-kube-api-access-x7fjt\") on node \"crc\" DevicePath \"\"" Mar 12 08:34:06 crc kubenswrapper[4809]: I0312 08:34:06.027630 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" event={"ID":"0648e299-c5fa-40d2-8997-bb9189dea019","Type":"ContainerDied","Data":"61e67dc51c2e392821488bf4032e39116a33805245dc9ddb99e6c7d3a171ce22"} Mar 12 08:34:06 crc kubenswrapper[4809]: I0312 08:34:06.028212 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61e67dc51c2e392821488bf4032e39116a33805245dc9ddb99e6c7d3a171ce22" Mar 12 08:34:06 crc kubenswrapper[4809]: I0312 08:34:06.027698 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555074-7rpg6" Mar 12 08:34:06 crc kubenswrapper[4809]: I0312 08:34:06.062426 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qxpjh"] Mar 12 08:34:06 crc kubenswrapper[4809]: I0312 08:34:06.071754 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qxpjh"] Mar 12 08:34:06 crc kubenswrapper[4809]: I0312 08:34:06.518324 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555068-7zqvj"] Mar 12 08:34:06 crc kubenswrapper[4809]: I0312 08:34:06.533380 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555068-7zqvj"] Mar 12 08:34:07 crc kubenswrapper[4809]: I0312 08:34:07.123604 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a" path="/var/lib/kubelet/pods/8f0ce648-579a-4e2a-9b18-3f5e1a9ead4a/volumes" Mar 12 08:34:07 crc kubenswrapper[4809]: I0312 08:34:07.124547 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a56d570f-f46b-4b34-9241-124d236d7e21" path="/var/lib/kubelet/pods/a56d570f-f46b-4b34-9241-124d236d7e21/volumes" Mar 12 08:34:29 crc kubenswrapper[4809]: I0312 08:34:29.980030 4809 scope.go:117] "RemoveContainer" containerID="0db4a14003c408fe75529fbe0259f755b43e44c40008f4a33a1ce14f88f9e1f5" Mar 12 08:34:30 crc kubenswrapper[4809]: I0312 08:34:30.017614 4809 scope.go:117] "RemoveContainer" containerID="0a22e9e2c499942df89501a02c7147e117207ba29dc1a525308304c227ac508f" Mar 12 08:34:30 crc kubenswrapper[4809]: I0312 08:34:30.090466 4809 scope.go:117] "RemoveContainer" containerID="d8fa398a028701803e96d3104388e36869dab06e89ada1de3c840908f82cf454" Mar 12 08:34:30 crc kubenswrapper[4809]: I0312 08:34:30.121651 4809 scope.go:117] "RemoveContainer" containerID="3df00dc1c45a265da9eb8de85ae550b3fe88069581253b858dd1b77cc3b0fda4" Mar 12 08:34:30 crc kubenswrapper[4809]: 
I0312 08:34:30.177776 4809 scope.go:117] "RemoveContainer" containerID="0be9d23b1007dffd29b365806fec94ed61f269d21b25655a004b7ec4fa52ad75" Mar 12 08:34:30 crc kubenswrapper[4809]: I0312 08:34:30.239338 4809 scope.go:117] "RemoveContainer" containerID="cb0b5aae4655ec5d15ff955cf21bf0af072f7fe49e63438ef83a5f9f31218560" Mar 12 08:34:30 crc kubenswrapper[4809]: I0312 08:34:30.295761 4809 scope.go:117] "RemoveContainer" containerID="ebf8743c0e7a1b5a1c243fff15f22d096343151bb387070b8e16be8809b7c964" Mar 12 08:34:45 crc kubenswrapper[4809]: I0312 08:34:45.049236 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:34:45 crc kubenswrapper[4809]: I0312 08:34:45.051262 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:34:49 crc kubenswrapper[4809]: I0312 08:34:49.639421 4809 generic.go:334] "Generic (PLEG): container finished" podID="1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59" containerID="de3ffe85b648c0cdd06d4ba951f6c72523baf5dd09433f1067e9928c088e26a9" exitCode=0 Mar 12 08:34:49 crc kubenswrapper[4809]: I0312 08:34:49.639459 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" event={"ID":"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59","Type":"ContainerDied","Data":"de3ffe85b648c0cdd06d4ba951f6c72523baf5dd09433f1067e9928c088e26a9"} Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.121969 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.304937 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-inventory\") pod \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.305145 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-ssh-key-openstack-edpm-ipam\") pod \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.305383 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt9jq\" (UniqueName: \"kubernetes.io/projected/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-kube-api-access-zt9jq\") pod \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\" (UID: \"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59\") " Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.311604 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-kube-api-access-zt9jq" (OuterVolumeSpecName: "kube-api-access-zt9jq") pod "1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59" (UID: "1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59"). InnerVolumeSpecName "kube-api-access-zt9jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.338762 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-inventory" (OuterVolumeSpecName: "inventory") pod "1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59" (UID: "1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.374433 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59" (UID: "1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.408285 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt9jq\" (UniqueName: \"kubernetes.io/projected/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-kube-api-access-zt9jq\") on node \"crc\" DevicePath \"\"" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.408329 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.408343 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.674480 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" event={"ID":"1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59","Type":"ContainerDied","Data":"40689e8ccbe2c048648c6e4b887c1bab8fa1e6913b8840f7e5753a65c9b5bd52"} Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.674524 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.674532 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40689e8ccbe2c048648c6e4b887c1bab8fa1e6913b8840f7e5753a65c9b5bd52" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.800339 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg"] Mar 12 08:34:51 crc kubenswrapper[4809]: E0312 08:34:51.801309 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.801330 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 08:34:51 crc kubenswrapper[4809]: E0312 08:34:51.801345 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0648e299-c5fa-40d2-8997-bb9189dea019" containerName="oc" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.801352 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0648e299-c5fa-40d2-8997-bb9189dea019" containerName="oc" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.801633 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.801652 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0648e299-c5fa-40d2-8997-bb9189dea019" containerName="oc" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.802557 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg"] Mar 12 08:34:51 crc 
kubenswrapper[4809]: I0312 08:34:51.802649 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.826484 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.826720 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.826968 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.827650 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.927793 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.927912 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:51 crc kubenswrapper[4809]: I0312 08:34:51.927966 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqjmf\" (UniqueName: \"kubernetes.io/projected/888c8215-5ca1-481a-88cb-b01e21be6eff-kube-api-access-qqjmf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:52 crc kubenswrapper[4809]: I0312 08:34:52.031840 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:52 crc kubenswrapper[4809]: I0312 08:34:52.032002 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:52 crc kubenswrapper[4809]: I0312 08:34:52.032072 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqjmf\" (UniqueName: \"kubernetes.io/projected/888c8215-5ca1-481a-88cb-b01e21be6eff-kube-api-access-qqjmf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:52 crc kubenswrapper[4809]: I0312 08:34:52.037888 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:52 crc kubenswrapper[4809]: I0312 08:34:52.038455 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:52 crc kubenswrapper[4809]: I0312 08:34:52.048846 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqjmf\" (UniqueName: \"kubernetes.io/projected/888c8215-5ca1-481a-88cb-b01e21be6eff-kube-api-access-qqjmf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:52 crc kubenswrapper[4809]: I0312 08:34:52.156934 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:34:52 crc kubenswrapper[4809]: I0312 08:34:52.790932 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg"] Mar 12 08:34:53 crc kubenswrapper[4809]: I0312 08:34:53.710171 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" event={"ID":"888c8215-5ca1-481a-88cb-b01e21be6eff","Type":"ContainerStarted","Data":"437d25b918993128073f8be2f5dd3ee0d6514c2f0ebfa37fd23925199d3e0743"} Mar 12 08:34:53 crc kubenswrapper[4809]: I0312 08:34:53.710655 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" event={"ID":"888c8215-5ca1-481a-88cb-b01e21be6eff","Type":"ContainerStarted","Data":"e3fbb22136607196946050133294911e2e4c47bf05b0942d1144744b06c4ee76"} Mar 12 08:34:53 crc kubenswrapper[4809]: I0312 08:34:53.741589 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" podStartSLOduration=2.236192719 podStartE2EDuration="2.741573539s" podCreationTimestamp="2026-03-12 08:34:51 +0000 UTC" firstStartedPulling="2026-03-12 08:34:52.798303468 +0000 UTC m=+2166.380339221" lastFinishedPulling="2026-03-12 08:34:53.303684288 +0000 UTC m=+2166.885720041" observedRunningTime="2026-03-12 08:34:53.735749521 +0000 UTC m=+2167.317785254" watchObservedRunningTime="2026-03-12 08:34:53.741573539 +0000 UTC m=+2167.323609272" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.066810 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c95a-account-create-update-k247d"] Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.084645 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-887k5"] Mar 12 08:34:59 
crc kubenswrapper[4809]: I0312 08:34:59.100690 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-887k5"] Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.131722 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b066594-e122-4fc4-95bb-c66b48bfd0f3" path="/var/lib/kubelet/pods/3b066594-e122-4fc4-95bb-c66b48bfd0f3/volumes" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.132745 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c95a-account-create-update-k247d"] Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.346163 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q77f6"] Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.350360 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.352589 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-utilities\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.352651 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx752\" (UniqueName: \"kubernetes.io/projected/3af019a0-aab2-419e-bd5f-a98ed2bf485f-kube-api-access-qx752\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.352742 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-catalog-content\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.356927 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q77f6"] Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.455005 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-utilities\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.455078 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx752\" (UniqueName: \"kubernetes.io/projected/3af019a0-aab2-419e-bd5f-a98ed2bf485f-kube-api-access-qx752\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.455177 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-catalog-content\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.455669 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-catalog-content\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 
08:34:59.455932 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-utilities\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.481870 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx752\" (UniqueName: \"kubernetes.io/projected/3af019a0-aab2-419e-bd5f-a98ed2bf485f-kube-api-access-qx752\") pod \"redhat-operators-q77f6\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:34:59 crc kubenswrapper[4809]: I0312 08:34:59.713782 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.053483 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ed56-account-create-update-mdf8p"] Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.086680 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-bqf9m"] Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.104447 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rr92v"] Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.114320 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ed56-account-create-update-mdf8p"] Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.125345 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-bqf9m"] Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.136087 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rr92v"] Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.259748 4809 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q77f6"] Mar 12 08:35:00 crc kubenswrapper[4809]: W0312 08:35:00.260504 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3af019a0_aab2_419e_bd5f_a98ed2bf485f.slice/crio-58c43e7548eb12f73faae2fa20b1077c39f1e5f04feef7e9b6b61b70c30fb752 WatchSource:0}: Error finding container 58c43e7548eb12f73faae2fa20b1077c39f1e5f04feef7e9b6b61b70c30fb752: Status 404 returned error can't find the container with id 58c43e7548eb12f73faae2fa20b1077c39f1e5f04feef7e9b6b61b70c30fb752 Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.808423 4809 generic.go:334] "Generic (PLEG): container finished" podID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerID="40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d" exitCode=0 Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.808607 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77f6" event={"ID":"3af019a0-aab2-419e-bd5f-a98ed2bf485f","Type":"ContainerDied","Data":"40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d"} Mar 12 08:35:00 crc kubenswrapper[4809]: I0312 08:35:00.809048 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77f6" event={"ID":"3af019a0-aab2-419e-bd5f-a98ed2bf485f","Type":"ContainerStarted","Data":"58c43e7548eb12f73faae2fa20b1077c39f1e5f04feef7e9b6b61b70c30fb752"} Mar 12 08:35:01 crc kubenswrapper[4809]: I0312 08:35:01.038219 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-353e-account-create-update-hmg8n"] Mar 12 08:35:01 crc kubenswrapper[4809]: I0312 08:35:01.050857 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-353e-account-create-update-hmg8n"] Mar 12 08:35:01 crc kubenswrapper[4809]: I0312 08:35:01.119253 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3698cbf3-c28c-429f-b568-d8a68b979dfb" path="/var/lib/kubelet/pods/3698cbf3-c28c-429f-b568-d8a68b979dfb/volumes" Mar 12 08:35:01 crc kubenswrapper[4809]: I0312 08:35:01.120191 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4043f775-958c-4f5d-828f-dda0811c0a7e" path="/var/lib/kubelet/pods/4043f775-958c-4f5d-828f-dda0811c0a7e/volumes" Mar 12 08:35:01 crc kubenswrapper[4809]: I0312 08:35:01.120817 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547a107f-945b-4d0d-929d-061a9a044077" path="/var/lib/kubelet/pods/547a107f-945b-4d0d-929d-061a9a044077/volumes" Mar 12 08:35:01 crc kubenswrapper[4809]: I0312 08:35:01.121508 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c61faad0-e94e-4480-8ac7-854d1717dc78" path="/var/lib/kubelet/pods/c61faad0-e94e-4480-8ac7-854d1717dc78/volumes" Mar 12 08:35:01 crc kubenswrapper[4809]: I0312 08:35:01.122757 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efb90ca2-e6b9-4160-a1be-80e9418f9d43" path="/var/lib/kubelet/pods/efb90ca2-e6b9-4160-a1be-80e9418f9d43/volumes" Mar 12 08:35:02 crc kubenswrapper[4809]: I0312 08:35:02.859530 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77f6" event={"ID":"3af019a0-aab2-419e-bd5f-a98ed2bf485f","Type":"ContainerStarted","Data":"8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd"} Mar 12 08:35:06 crc kubenswrapper[4809]: I0312 08:35:06.906895 4809 generic.go:334] "Generic (PLEG): container finished" podID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerID="8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd" exitCode=0 Mar 12 08:35:06 crc kubenswrapper[4809]: I0312 08:35:06.906993 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77f6" 
event={"ID":"3af019a0-aab2-419e-bd5f-a98ed2bf485f","Type":"ContainerDied","Data":"8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd"} Mar 12 08:35:07 crc kubenswrapper[4809]: I0312 08:35:07.933513 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77f6" event={"ID":"3af019a0-aab2-419e-bd5f-a98ed2bf485f","Type":"ContainerStarted","Data":"a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4"} Mar 12 08:35:07 crc kubenswrapper[4809]: I0312 08:35:07.981399 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q77f6" podStartSLOduration=2.477677999 podStartE2EDuration="8.981350829s" podCreationTimestamp="2026-03-12 08:34:59 +0000 UTC" firstStartedPulling="2026-03-12 08:35:00.816385491 +0000 UTC m=+2174.398421224" lastFinishedPulling="2026-03-12 08:35:07.320058321 +0000 UTC m=+2180.902094054" observedRunningTime="2026-03-12 08:35:07.951429354 +0000 UTC m=+2181.533465087" watchObservedRunningTime="2026-03-12 08:35:07.981350829 +0000 UTC m=+2181.563386592" Mar 12 08:35:09 crc kubenswrapper[4809]: I0312 08:35:09.714698 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:35:09 crc kubenswrapper[4809]: I0312 08:35:09.715087 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:35:10 crc kubenswrapper[4809]: I0312 08:35:10.781532 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q77f6" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="registry-server" probeResult="failure" output=< Mar 12 08:35:10 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:35:10 crc kubenswrapper[4809]: > Mar 12 08:35:15 crc kubenswrapper[4809]: I0312 08:35:15.048318 4809 patch_prober.go:28] interesting 
pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:35:15 crc kubenswrapper[4809]: I0312 08:35:15.048941 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:35:20 crc kubenswrapper[4809]: I0312 08:35:20.771271 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q77f6" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="registry-server" probeResult="failure" output=< Mar 12 08:35:20 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:35:20 crc kubenswrapper[4809]: > Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.170596 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-265gm"] Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.173880 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.184682 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-265gm"] Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.256774 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5klsp\" (UniqueName: \"kubernetes.io/projected/a7e75c75-6a38-4ef2-934e-36e63113c978-kube-api-access-5klsp\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.256864 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-catalog-content\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.257047 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-utilities\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.359268 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-utilities\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.359413 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5klsp\" (UniqueName: \"kubernetes.io/projected/a7e75c75-6a38-4ef2-934e-36e63113c978-kube-api-access-5klsp\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.359463 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-catalog-content\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.360003 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-utilities\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.360083 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-catalog-content\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.379990 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5klsp\" (UniqueName: \"kubernetes.io/projected/a7e75c75-6a38-4ef2-934e-36e63113c978-kube-api-access-5klsp\") pod \"redhat-marketplace-265gm\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:21 crc kubenswrapper[4809]: I0312 08:35:21.518498 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:22 crc kubenswrapper[4809]: I0312 08:35:22.184246 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-265gm"] Mar 12 08:35:22 crc kubenswrapper[4809]: W0312 08:35:22.195959 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7e75c75_6a38_4ef2_934e_36e63113c978.slice/crio-914b7be934fd4ed8933808da9ab586a8aea01b81a66105ab0a3dd50aad7baba3 WatchSource:0}: Error finding container 914b7be934fd4ed8933808da9ab586a8aea01b81a66105ab0a3dd50aad7baba3: Status 404 returned error can't find the container with id 914b7be934fd4ed8933808da9ab586a8aea01b81a66105ab0a3dd50aad7baba3 Mar 12 08:35:23 crc kubenswrapper[4809]: I0312 08:35:23.166855 4809 generic.go:334] "Generic (PLEG): container finished" podID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerID="a83ea793fa5561a8d54951072a4eb84797feec33a1e817417dd0ccb467bdaf99" exitCode=0 Mar 12 08:35:23 crc kubenswrapper[4809]: I0312 08:35:23.166979 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-265gm" event={"ID":"a7e75c75-6a38-4ef2-934e-36e63113c978","Type":"ContainerDied","Data":"a83ea793fa5561a8d54951072a4eb84797feec33a1e817417dd0ccb467bdaf99"} Mar 12 08:35:23 crc kubenswrapper[4809]: I0312 08:35:23.167399 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-265gm" event={"ID":"a7e75c75-6a38-4ef2-934e-36e63113c978","Type":"ContainerStarted","Data":"914b7be934fd4ed8933808da9ab586a8aea01b81a66105ab0a3dd50aad7baba3"} Mar 12 08:35:24 crc kubenswrapper[4809]: I0312 08:35:24.182239 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-265gm" 
event={"ID":"a7e75c75-6a38-4ef2-934e-36e63113c978","Type":"ContainerStarted","Data":"408f7aca14fd09e1456d0545c97ead2a90a6185a59e596822c5fc8a10a3997da"} Mar 12 08:35:26 crc kubenswrapper[4809]: I0312 08:35:26.204899 4809 generic.go:334] "Generic (PLEG): container finished" podID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerID="408f7aca14fd09e1456d0545c97ead2a90a6185a59e596822c5fc8a10a3997da" exitCode=0 Mar 12 08:35:26 crc kubenswrapper[4809]: I0312 08:35:26.205419 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-265gm" event={"ID":"a7e75c75-6a38-4ef2-934e-36e63113c978","Type":"ContainerDied","Data":"408f7aca14fd09e1456d0545c97ead2a90a6185a59e596822c5fc8a10a3997da"} Mar 12 08:35:27 crc kubenswrapper[4809]: I0312 08:35:27.223834 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-265gm" event={"ID":"a7e75c75-6a38-4ef2-934e-36e63113c978","Type":"ContainerStarted","Data":"72386443de708a82483e4c7b796fa86cd9fd9efca76c50d9d6945d8f0affe044"} Mar 12 08:35:27 crc kubenswrapper[4809]: I0312 08:35:27.249884 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-265gm" podStartSLOduration=2.784629739 podStartE2EDuration="6.249858953s" podCreationTimestamp="2026-03-12 08:35:21 +0000 UTC" firstStartedPulling="2026-03-12 08:35:23.169861318 +0000 UTC m=+2196.751897051" lastFinishedPulling="2026-03-12 08:35:26.635090532 +0000 UTC m=+2200.217126265" observedRunningTime="2026-03-12 08:35:27.24533161 +0000 UTC m=+2200.827367353" watchObservedRunningTime="2026-03-12 08:35:27.249858953 +0000 UTC m=+2200.831894686" Mar 12 08:35:30 crc kubenswrapper[4809]: I0312 08:35:30.500149 4809 scope.go:117] "RemoveContainer" containerID="41aea85cc92f7ccdcc94d0796405333e1ca4469648cab2e1a3a034cd5178dce1" Mar 12 08:35:30 crc kubenswrapper[4809]: I0312 08:35:30.532214 4809 scope.go:117] "RemoveContainer" 
containerID="86c33c08028baa3b4062ef96f1f1e2ad8effe34c49f8ea3c394ec6630d7acb89" Mar 12 08:35:30 crc kubenswrapper[4809]: I0312 08:35:30.606388 4809 scope.go:117] "RemoveContainer" containerID="0e60788436b4e057435c814d9f27fba1dc94f4221b9bac8a2b753ea57d926a0c" Mar 12 08:35:30 crc kubenswrapper[4809]: I0312 08:35:30.657951 4809 scope.go:117] "RemoveContainer" containerID="8291d5b0647993172b77bf5ff49ff769f72510cee6072dffdedd08da817cff18" Mar 12 08:35:30 crc kubenswrapper[4809]: I0312 08:35:30.721173 4809 scope.go:117] "RemoveContainer" containerID="28ab8f075b84fcafe9f059f99c244c02f523ec7e33c220de759fc6097d5e4d2f" Mar 12 08:35:30 crc kubenswrapper[4809]: I0312 08:35:30.780329 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q77f6" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="registry-server" probeResult="failure" output=< Mar 12 08:35:30 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:35:30 crc kubenswrapper[4809]: > Mar 12 08:35:30 crc kubenswrapper[4809]: I0312 08:35:30.795007 4809 scope.go:117] "RemoveContainer" containerID="7519b599ab886240e22563946d82b66fe3199afd3757d33036c754c228eadd92" Mar 12 08:35:31 crc kubenswrapper[4809]: I0312 08:35:31.518879 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:31 crc kubenswrapper[4809]: I0312 08:35:31.519422 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:31 crc kubenswrapper[4809]: I0312 08:35:31.602679 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:32 crc kubenswrapper[4809]: I0312 08:35:32.331718 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:32 crc 
kubenswrapper[4809]: I0312 08:35:32.389459 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-265gm"] Mar 12 08:35:34 crc kubenswrapper[4809]: I0312 08:35:34.065542 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbtp5"] Mar 12 08:35:34 crc kubenswrapper[4809]: I0312 08:35:34.077600 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vbtp5"] Mar 12 08:35:34 crc kubenswrapper[4809]: I0312 08:35:34.297466 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-265gm" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerName="registry-server" containerID="cri-o://72386443de708a82483e4c7b796fa86cd9fd9efca76c50d9d6945d8f0affe044" gracePeriod=2 Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.122238 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff0ea45-0824-477e-9dde-a71bca537168" path="/var/lib/kubelet/pods/7ff0ea45-0824-477e-9dde-a71bca537168/volumes" Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.318362 4809 generic.go:334] "Generic (PLEG): container finished" podID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerID="72386443de708a82483e4c7b796fa86cd9fd9efca76c50d9d6945d8f0affe044" exitCode=0 Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.318438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-265gm" event={"ID":"a7e75c75-6a38-4ef2-934e-36e63113c978","Type":"ContainerDied","Data":"72386443de708a82483e4c7b796fa86cd9fd9efca76c50d9d6945d8f0affe044"} Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.318487 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-265gm" 
event={"ID":"a7e75c75-6a38-4ef2-934e-36e63113c978","Type":"ContainerDied","Data":"914b7be934fd4ed8933808da9ab586a8aea01b81a66105ab0a3dd50aad7baba3"} Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.318513 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="914b7be934fd4ed8933808da9ab586a8aea01b81a66105ab0a3dd50aad7baba3" Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.432352 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.474133 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-catalog-content\") pod \"a7e75c75-6a38-4ef2-934e-36e63113c978\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.474206 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-utilities\") pod \"a7e75c75-6a38-4ef2-934e-36e63113c978\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.474451 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5klsp\" (UniqueName: \"kubernetes.io/projected/a7e75c75-6a38-4ef2-934e-36e63113c978-kube-api-access-5klsp\") pod \"a7e75c75-6a38-4ef2-934e-36e63113c978\" (UID: \"a7e75c75-6a38-4ef2-934e-36e63113c978\") " Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.475283 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-utilities" (OuterVolumeSpecName: "utilities") pod "a7e75c75-6a38-4ef2-934e-36e63113c978" (UID: "a7e75c75-6a38-4ef2-934e-36e63113c978"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.501226 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e75c75-6a38-4ef2-934e-36e63113c978-kube-api-access-5klsp" (OuterVolumeSpecName: "kube-api-access-5klsp") pod "a7e75c75-6a38-4ef2-934e-36e63113c978" (UID: "a7e75c75-6a38-4ef2-934e-36e63113c978"). InnerVolumeSpecName "kube-api-access-5klsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.507734 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7e75c75-6a38-4ef2-934e-36e63113c978" (UID: "a7e75c75-6a38-4ef2-934e-36e63113c978"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.577439 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5klsp\" (UniqueName: \"kubernetes.io/projected/a7e75c75-6a38-4ef2-934e-36e63113c978-kube-api-access-5klsp\") on node \"crc\" DevicePath \"\"" Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.577478 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:35:35 crc kubenswrapper[4809]: I0312 08:35:35.577515 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e75c75-6a38-4ef2-934e-36e63113c978-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:35:36 crc kubenswrapper[4809]: I0312 08:35:36.328946 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-265gm" Mar 12 08:35:36 crc kubenswrapper[4809]: I0312 08:35:36.447258 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-265gm"] Mar 12 08:35:36 crc kubenswrapper[4809]: I0312 08:35:36.461912 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-265gm"] Mar 12 08:35:37 crc kubenswrapper[4809]: I0312 08:35:37.119813 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" path="/var/lib/kubelet/pods/a7e75c75-6a38-4ef2-934e-36e63113c978/volumes" Mar 12 08:35:40 crc kubenswrapper[4809]: I0312 08:35:40.764874 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q77f6" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="registry-server" probeResult="failure" output=< Mar 12 08:35:40 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:35:40 crc kubenswrapper[4809]: > Mar 12 08:35:45 crc kubenswrapper[4809]: I0312 08:35:45.048980 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:35:45 crc kubenswrapper[4809]: I0312 08:35:45.049765 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:35:45 crc kubenswrapper[4809]: I0312 08:35:45.049839 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:35:45 crc kubenswrapper[4809]: I0312 08:35:45.050754 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:35:45 crc kubenswrapper[4809]: I0312 08:35:45.050862 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" gracePeriod=600 Mar 12 08:35:45 crc kubenswrapper[4809]: E0312 08:35:45.322914 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:35:45 crc kubenswrapper[4809]: I0312 08:35:45.435159 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" exitCode=0 Mar 12 08:35:45 crc kubenswrapper[4809]: I0312 08:35:45.435216 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0"} Mar 12 08:35:45 crc 
kubenswrapper[4809]: I0312 08:35:45.435275 4809 scope.go:117] "RemoveContainer" containerID="97289a619c1926063f6f049c601dbe841c3463284a0fd671e4120b278e3af3bb" Mar 12 08:35:45 crc kubenswrapper[4809]: I0312 08:35:45.436017 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:35:45 crc kubenswrapper[4809]: E0312 08:35:45.436402 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:35:46 crc kubenswrapper[4809]: I0312 08:35:46.032230 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-814c-account-create-update-8vv4f"] Mar 12 08:35:46 crc kubenswrapper[4809]: I0312 08:35:46.042382 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-814c-account-create-update-8vv4f"] Mar 12 08:35:47 crc kubenswrapper[4809]: I0312 08:35:47.125535 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd15dc5-2ce6-4c1b-a683-f73beca93754" path="/var/lib/kubelet/pods/5dd15dc5-2ce6-4c1b-a683-f73beca93754/volumes" Mar 12 08:35:49 crc kubenswrapper[4809]: I0312 08:35:49.040103 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-qrkcj"] Mar 12 08:35:49 crc kubenswrapper[4809]: I0312 08:35:49.063747 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-qrkcj"] Mar 12 08:35:49 crc kubenswrapper[4809]: I0312 08:35:49.121597 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d584655-82a5-46a8-ae0b-9c1abf01de7a" path="/var/lib/kubelet/pods/0d584655-82a5-46a8-ae0b-9c1abf01de7a/volumes" Mar 12 
08:35:49 crc kubenswrapper[4809]: I0312 08:35:49.792825 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:35:49 crc kubenswrapper[4809]: I0312 08:35:49.850342 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:35:50 crc kubenswrapper[4809]: I0312 08:35:50.050356 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q77f6"] Mar 12 08:35:51 crc kubenswrapper[4809]: I0312 08:35:51.528411 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q77f6" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="registry-server" containerID="cri-o://a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4" gracePeriod=2 Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.045513 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.161848 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-catalog-content\") pod \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.162248 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-utilities\") pod \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.162391 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx752\" (UniqueName: \"kubernetes.io/projected/3af019a0-aab2-419e-bd5f-a98ed2bf485f-kube-api-access-qx752\") pod \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\" (UID: \"3af019a0-aab2-419e-bd5f-a98ed2bf485f\") " Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.163585 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-utilities" (OuterVolumeSpecName: "utilities") pod "3af019a0-aab2-419e-bd5f-a98ed2bf485f" (UID: "3af019a0-aab2-419e-bd5f-a98ed2bf485f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.170513 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af019a0-aab2-419e-bd5f-a98ed2bf485f-kube-api-access-qx752" (OuterVolumeSpecName: "kube-api-access-qx752") pod "3af019a0-aab2-419e-bd5f-a98ed2bf485f" (UID: "3af019a0-aab2-419e-bd5f-a98ed2bf485f"). InnerVolumeSpecName "kube-api-access-qx752". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.267580 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.267631 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx752\" (UniqueName: \"kubernetes.io/projected/3af019a0-aab2-419e-bd5f-a98ed2bf485f-kube-api-access-qx752\") on node \"crc\" DevicePath \"\"" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.293194 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3af019a0-aab2-419e-bd5f-a98ed2bf485f" (UID: "3af019a0-aab2-419e-bd5f-a98ed2bf485f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.370471 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af019a0-aab2-419e-bd5f-a98ed2bf485f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.544593 4809 generic.go:334] "Generic (PLEG): container finished" podID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerID="a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4" exitCode=0 Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.544645 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77f6" event={"ID":"3af019a0-aab2-419e-bd5f-a98ed2bf485f","Type":"ContainerDied","Data":"a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4"} Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.544676 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-q77f6" event={"ID":"3af019a0-aab2-419e-bd5f-a98ed2bf485f","Type":"ContainerDied","Data":"58c43e7548eb12f73faae2fa20b1077c39f1e5f04feef7e9b6b61b70c30fb752"} Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.544706 4809 scope.go:117] "RemoveContainer" containerID="a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.544990 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q77f6" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.580855 4809 scope.go:117] "RemoveContainer" containerID="8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.589726 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q77f6"] Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.602904 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q77f6"] Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.613651 4809 scope.go:117] "RemoveContainer" containerID="40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.690155 4809 scope.go:117] "RemoveContainer" containerID="a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4" Mar 12 08:35:52 crc kubenswrapper[4809]: E0312 08:35:52.690655 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4\": container with ID starting with a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4 not found: ID does not exist" containerID="a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.690723 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4"} err="failed to get container status \"a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4\": rpc error: code = NotFound desc = could not find container \"a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4\": container with ID starting with a6306d92c64a1c88757cd9a74bf0f23db8089d1e0b6a0f16d5bf3b771e6533a4 not found: ID does not exist" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.690766 4809 scope.go:117] "RemoveContainer" containerID="8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd" Mar 12 08:35:52 crc kubenswrapper[4809]: E0312 08:35:52.691317 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd\": container with ID starting with 8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd not found: ID does not exist" containerID="8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.691365 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd"} err="failed to get container status \"8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd\": rpc error: code = NotFound desc = could not find container \"8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd\": container with ID starting with 8ebcdb8964dd6b30900447c04f6f38f42c89c05b0086cb8a14efc67f0e4f30dd not found: ID does not exist" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.691410 4809 scope.go:117] "RemoveContainer" containerID="40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d" Mar 12 08:35:52 crc kubenswrapper[4809]: E0312 
08:35:52.691770 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d\": container with ID starting with 40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d not found: ID does not exist" containerID="40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d" Mar 12 08:35:52 crc kubenswrapper[4809]: I0312 08:35:52.691807 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d"} err="failed to get container status \"40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d\": rpc error: code = NotFound desc = could not find container \"40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d\": container with ID starting with 40560e45da165688e7a214ac540aa2cbbb2fb9e1d6e835d61a7d05fb98ba603d not found: ID does not exist" Mar 12 08:35:53 crc kubenswrapper[4809]: I0312 08:35:53.136027 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" path="/var/lib/kubelet/pods/3af019a0-aab2-419e-bd5f-a98ed2bf485f/volumes" Mar 12 08:35:55 crc kubenswrapper[4809]: I0312 08:35:55.080180 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-m6nvs"] Mar 12 08:35:55 crc kubenswrapper[4809]: I0312 08:35:55.105706 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-m6nvs"] Mar 12 08:35:55 crc kubenswrapper[4809]: I0312 08:35:55.128047 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661848d4-7363-415e-8446-98751a00c6de" path="/var/lib/kubelet/pods/661848d4-7363-415e-8446-98751a00c6de/volumes" Mar 12 08:35:59 crc kubenswrapper[4809]: I0312 08:35:59.107132 4809 scope.go:117] "RemoveContainer" 
containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:35:59 crc kubenswrapper[4809]: E0312 08:35:59.107832 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.037642 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bc2l4"] Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.054325 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bc2l4"] Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.142244 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555076-4wwrr"] Mar 12 08:36:00 crc kubenswrapper[4809]: E0312 08:36:00.142895 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="registry-server" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.142912 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="registry-server" Mar 12 08:36:00 crc kubenswrapper[4809]: E0312 08:36:00.142957 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerName="registry-server" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.142966 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerName="registry-server" Mar 12 08:36:00 crc kubenswrapper[4809]: E0312 08:36:00.142981 4809 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerName="extract-utilities" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.142995 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerName="extract-utilities" Mar 12 08:36:00 crc kubenswrapper[4809]: E0312 08:36:00.143015 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="extract-content" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.143023 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="extract-content" Mar 12 08:36:00 crc kubenswrapper[4809]: E0312 08:36:00.143050 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerName="extract-content" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.143058 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerName="extract-content" Mar 12 08:36:00 crc kubenswrapper[4809]: E0312 08:36:00.143071 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="extract-utilities" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.143079 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="extract-utilities" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.143409 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af019a0-aab2-419e-bd5f-a98ed2bf485f" containerName="registry-server" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.143429 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e75c75-6a38-4ef2-934e-36e63113c978" containerName="registry-server" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.144548 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.147431 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.147740 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.147814 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.158259 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555076-4wwrr"] Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.222930 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92w9\" (UniqueName: \"kubernetes.io/projected/0a12be75-8f6d-4114-be11-98ecd6f751f8-kube-api-access-g92w9\") pod \"auto-csr-approver-29555076-4wwrr\" (UID: \"0a12be75-8f6d-4114-be11-98ecd6f751f8\") " pod="openshift-infra/auto-csr-approver-29555076-4wwrr" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.325584 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g92w9\" (UniqueName: \"kubernetes.io/projected/0a12be75-8f6d-4114-be11-98ecd6f751f8-kube-api-access-g92w9\") pod \"auto-csr-approver-29555076-4wwrr\" (UID: \"0a12be75-8f6d-4114-be11-98ecd6f751f8\") " pod="openshift-infra/auto-csr-approver-29555076-4wwrr" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.349678 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92w9\" (UniqueName: \"kubernetes.io/projected/0a12be75-8f6d-4114-be11-98ecd6f751f8-kube-api-access-g92w9\") pod \"auto-csr-approver-29555076-4wwrr\" (UID: \"0a12be75-8f6d-4114-be11-98ecd6f751f8\") " 
pod="openshift-infra/auto-csr-approver-29555076-4wwrr" Mar 12 08:36:00 crc kubenswrapper[4809]: I0312 08:36:00.468584 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" Mar 12 08:36:01 crc kubenswrapper[4809]: I0312 08:36:01.009405 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555076-4wwrr"] Mar 12 08:36:01 crc kubenswrapper[4809]: I0312 08:36:01.120563 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b188d8-97a2-47ef-a863-e243b1f38483" path="/var/lib/kubelet/pods/20b188d8-97a2-47ef-a863-e243b1f38483/volumes" Mar 12 08:36:01 crc kubenswrapper[4809]: I0312 08:36:01.667996 4809 generic.go:334] "Generic (PLEG): container finished" podID="888c8215-5ca1-481a-88cb-b01e21be6eff" containerID="437d25b918993128073f8be2f5dd3ee0d6514c2f0ebfa37fd23925199d3e0743" exitCode=0 Mar 12 08:36:01 crc kubenswrapper[4809]: I0312 08:36:01.668082 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" event={"ID":"888c8215-5ca1-481a-88cb-b01e21be6eff","Type":"ContainerDied","Data":"437d25b918993128073f8be2f5dd3ee0d6514c2f0ebfa37fd23925199d3e0743"} Mar 12 08:36:01 crc kubenswrapper[4809]: I0312 08:36:01.669848 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" event={"ID":"0a12be75-8f6d-4114-be11-98ecd6f751f8","Type":"ContainerStarted","Data":"5284b309eb6f469da153c92e9a7196af17ac051a319737bb68a2cc06c0ade7e4"} Mar 12 08:36:02 crc kubenswrapper[4809]: I0312 08:36:02.683179 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" event={"ID":"0a12be75-8f6d-4114-be11-98ecd6f751f8","Type":"ContainerStarted","Data":"1753d2a0768e80cda3adcbd57d5e42c9e69c26a21f8af4f7d60b282783c38b9a"} Mar 12 08:36:02 crc kubenswrapper[4809]: I0312 08:36:02.719810 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" podStartSLOduration=1.76663123 podStartE2EDuration="2.719780425s" podCreationTimestamp="2026-03-12 08:36:00 +0000 UTC" firstStartedPulling="2026-03-12 08:36:01.01724314 +0000 UTC m=+2234.599278873" lastFinishedPulling="2026-03-12 08:36:01.970392325 +0000 UTC m=+2235.552428068" observedRunningTime="2026-03-12 08:36:02.709570637 +0000 UTC m=+2236.291606380" watchObservedRunningTime="2026-03-12 08:36:02.719780425 +0000 UTC m=+2236.301816158" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.251295 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.307766 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-ssh-key-openstack-edpm-ipam\") pod \"888c8215-5ca1-481a-88cb-b01e21be6eff\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.308261 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-inventory\") pod \"888c8215-5ca1-481a-88cb-b01e21be6eff\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.308335 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqjmf\" (UniqueName: \"kubernetes.io/projected/888c8215-5ca1-481a-88cb-b01e21be6eff-kube-api-access-qqjmf\") pod \"888c8215-5ca1-481a-88cb-b01e21be6eff\" (UID: \"888c8215-5ca1-481a-88cb-b01e21be6eff\") " Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.316331 4809 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/888c8215-5ca1-481a-88cb-b01e21be6eff-kube-api-access-qqjmf" (OuterVolumeSpecName: "kube-api-access-qqjmf") pod "888c8215-5ca1-481a-88cb-b01e21be6eff" (UID: "888c8215-5ca1-481a-88cb-b01e21be6eff"). InnerVolumeSpecName "kube-api-access-qqjmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.349772 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-inventory" (OuterVolumeSpecName: "inventory") pod "888c8215-5ca1-481a-88cb-b01e21be6eff" (UID: "888c8215-5ca1-481a-88cb-b01e21be6eff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.367106 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "888c8215-5ca1-481a-88cb-b01e21be6eff" (UID: "888c8215-5ca1-481a-88cb-b01e21be6eff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.413392 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.413429 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c8215-5ca1-481a-88cb-b01e21be6eff-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.413441 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqjmf\" (UniqueName: \"kubernetes.io/projected/888c8215-5ca1-481a-88cb-b01e21be6eff-kube-api-access-qqjmf\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.700250 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.700556 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg" event={"ID":"888c8215-5ca1-481a-88cb-b01e21be6eff","Type":"ContainerDied","Data":"e3fbb22136607196946050133294911e2e4c47bf05b0942d1144744b06c4ee76"} Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.700764 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3fbb22136607196946050133294911e2e4c47bf05b0942d1144744b06c4ee76" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.702946 4809 generic.go:334] "Generic (PLEG): container finished" podID="0a12be75-8f6d-4114-be11-98ecd6f751f8" containerID="1753d2a0768e80cda3adcbd57d5e42c9e69c26a21f8af4f7d60b282783c38b9a" exitCode=0 Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.702981 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" event={"ID":"0a12be75-8f6d-4114-be11-98ecd6f751f8","Type":"ContainerDied","Data":"1753d2a0768e80cda3adcbd57d5e42c9e69c26a21f8af4f7d60b282783c38b9a"} Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.824427 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst"] Mar 12 08:36:03 crc kubenswrapper[4809]: E0312 08:36:03.825327 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c8215-5ca1-481a-88cb-b01e21be6eff" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.825355 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c8215-5ca1-481a-88cb-b01e21be6eff" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.825748 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="888c8215-5ca1-481a-88cb-b01e21be6eff" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.827078 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.835745 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.836474 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.837518 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.839068 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.844296 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst"] Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.926706 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2rtg\" (UniqueName: \"kubernetes.io/projected/833f444f-3131-4fe7-b59a-fd9c67224b6e-kube-api-access-c2rtg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 08:36:03.926858 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:03 crc kubenswrapper[4809]: I0312 
08:36:03.926977 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:04 crc kubenswrapper[4809]: I0312 08:36:04.030962 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:04 crc kubenswrapper[4809]: I0312 08:36:04.031578 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2rtg\" (UniqueName: \"kubernetes.io/projected/833f444f-3131-4fe7-b59a-fd9c67224b6e-kube-api-access-c2rtg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:04 crc kubenswrapper[4809]: I0312 08:36:04.031781 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:04 crc kubenswrapper[4809]: I0312 08:36:04.036901 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:04 crc kubenswrapper[4809]: I0312 08:36:04.042176 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:04 crc kubenswrapper[4809]: I0312 08:36:04.055568 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2rtg\" (UniqueName: \"kubernetes.io/projected/833f444f-3131-4fe7-b59a-fd9c67224b6e-kube-api-access-c2rtg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-trnst\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:04 crc kubenswrapper[4809]: I0312 08:36:04.154842 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:04 crc kubenswrapper[4809]: I0312 08:36:04.850595 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst"] Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.227997 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.284003 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g92w9\" (UniqueName: \"kubernetes.io/projected/0a12be75-8f6d-4114-be11-98ecd6f751f8-kube-api-access-g92w9\") pod \"0a12be75-8f6d-4114-be11-98ecd6f751f8\" (UID: \"0a12be75-8f6d-4114-be11-98ecd6f751f8\") " Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.293920 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a12be75-8f6d-4114-be11-98ecd6f751f8-kube-api-access-g92w9" (OuterVolumeSpecName: "kube-api-access-g92w9") pod "0a12be75-8f6d-4114-be11-98ecd6f751f8" (UID: "0a12be75-8f6d-4114-be11-98ecd6f751f8"). InnerVolumeSpecName "kube-api-access-g92w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.388871 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g92w9\" (UniqueName: \"kubernetes.io/projected/0a12be75-8f6d-4114-be11-98ecd6f751f8-kube-api-access-g92w9\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.742524 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" event={"ID":"833f444f-3131-4fe7-b59a-fd9c67224b6e","Type":"ContainerStarted","Data":"b5c3cb275c2c8b2f21d1182ee708fb6295229e68aae149cd39543d5805d0b87d"} Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.742592 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" event={"ID":"833f444f-3131-4fe7-b59a-fd9c67224b6e","Type":"ContainerStarted","Data":"8df730f91d505010071dd0b549a5cd86e1e883e64b3ea0095ef64256d5dd9e56"} Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.763940 4809 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" podStartSLOduration=2.220843067 podStartE2EDuration="2.763904414s" podCreationTimestamp="2026-03-12 08:36:03 +0000 UTC" firstStartedPulling="2026-03-12 08:36:04.858808647 +0000 UTC m=+2238.440844380" lastFinishedPulling="2026-03-12 08:36:05.401869994 +0000 UTC m=+2238.983905727" observedRunningTime="2026-03-12 08:36:05.759172796 +0000 UTC m=+2239.341208549" watchObservedRunningTime="2026-03-12 08:36:05.763904414 +0000 UTC m=+2239.345940147" Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.766993 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" event={"ID":"0a12be75-8f6d-4114-be11-98ecd6f751f8","Type":"ContainerDied","Data":"5284b309eb6f469da153c92e9a7196af17ac051a319737bb68a2cc06c0ade7e4"} Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.767103 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5284b309eb6f469da153c92e9a7196af17ac051a319737bb68a2cc06c0ade7e4" Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.767240 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555076-4wwrr" Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.799028 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555070-k86cx"] Mar 12 08:36:05 crc kubenswrapper[4809]: I0312 08:36:05.810793 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555070-k86cx"] Mar 12 08:36:07 crc kubenswrapper[4809]: I0312 08:36:07.119967 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2b5d5b-19ed-4e05-9165-b5064fec1040" path="/var/lib/kubelet/pods/ba2b5d5b-19ed-4e05-9165-b5064fec1040/volumes" Mar 12 08:36:10 crc kubenswrapper[4809]: I0312 08:36:10.839751 4809 generic.go:334] "Generic (PLEG): container finished" podID="833f444f-3131-4fe7-b59a-fd9c67224b6e" containerID="b5c3cb275c2c8b2f21d1182ee708fb6295229e68aae149cd39543d5805d0b87d" exitCode=0 Mar 12 08:36:10 crc kubenswrapper[4809]: I0312 08:36:10.839899 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" event={"ID":"833f444f-3131-4fe7-b59a-fd9c67224b6e","Type":"ContainerDied","Data":"b5c3cb275c2c8b2f21d1182ee708fb6295229e68aae149cd39543d5805d0b87d"} Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.393141 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.506653 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2rtg\" (UniqueName: \"kubernetes.io/projected/833f444f-3131-4fe7-b59a-fd9c67224b6e-kube-api-access-c2rtg\") pod \"833f444f-3131-4fe7-b59a-fd9c67224b6e\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.507056 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-inventory\") pod \"833f444f-3131-4fe7-b59a-fd9c67224b6e\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.507266 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-ssh-key-openstack-edpm-ipam\") pod \"833f444f-3131-4fe7-b59a-fd9c67224b6e\" (UID: \"833f444f-3131-4fe7-b59a-fd9c67224b6e\") " Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.513369 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833f444f-3131-4fe7-b59a-fd9c67224b6e-kube-api-access-c2rtg" (OuterVolumeSpecName: "kube-api-access-c2rtg") pod "833f444f-3131-4fe7-b59a-fd9c67224b6e" (UID: "833f444f-3131-4fe7-b59a-fd9c67224b6e"). InnerVolumeSpecName "kube-api-access-c2rtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.539452 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "833f444f-3131-4fe7-b59a-fd9c67224b6e" (UID: "833f444f-3131-4fe7-b59a-fd9c67224b6e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.548794 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-inventory" (OuterVolumeSpecName: "inventory") pod "833f444f-3131-4fe7-b59a-fd9c67224b6e" (UID: "833f444f-3131-4fe7-b59a-fd9c67224b6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.611219 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2rtg\" (UniqueName: \"kubernetes.io/projected/833f444f-3131-4fe7-b59a-fd9c67224b6e-kube-api-access-c2rtg\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.611258 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.611285 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833f444f-3131-4fe7-b59a-fd9c67224b6e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.864708 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" 
event={"ID":"833f444f-3131-4fe7-b59a-fd9c67224b6e","Type":"ContainerDied","Data":"8df730f91d505010071dd0b549a5cd86e1e883e64b3ea0095ef64256d5dd9e56"} Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.864755 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df730f91d505010071dd0b549a5cd86e1e883e64b3ea0095ef64256d5dd9e56" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.864856 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-trnst" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.961490 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v"] Mar 12 08:36:12 crc kubenswrapper[4809]: E0312 08:36:12.962224 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833f444f-3131-4fe7-b59a-fd9c67224b6e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.962246 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="833f444f-3131-4fe7-b59a-fd9c67224b6e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:12 crc kubenswrapper[4809]: E0312 08:36:12.962304 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a12be75-8f6d-4114-be11-98ecd6f751f8" containerName="oc" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.962314 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a12be75-8f6d-4114-be11-98ecd6f751f8" containerName="oc" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.962639 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a12be75-8f6d-4114-be11-98ecd6f751f8" containerName="oc" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.962669 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="833f444f-3131-4fe7-b59a-fd9c67224b6e" 
containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.963659 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.966222 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.970061 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.970243 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.970427 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:36:12 crc kubenswrapper[4809]: I0312 08:36:12.982478 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v"] Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.025029 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.025288 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68bb8\" (UniqueName: \"kubernetes.io/projected/7291754c-2659-44a6-b305-527b19034672-kube-api-access-68bb8\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.025337 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.127413 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.127497 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68bb8\" (UniqueName: \"kubernetes.io/projected/7291754c-2659-44a6-b305-527b19034672-kube-api-access-68bb8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.127536 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: 
I0312 08:36:13.131499 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.131554 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.147696 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68bb8\" (UniqueName: \"kubernetes.io/projected/7291754c-2659-44a6-b305-527b19034672-kube-api-access-68bb8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wc48v\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.285532 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:13 crc kubenswrapper[4809]: I0312 08:36:13.974287 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v"] Mar 12 08:36:14 crc kubenswrapper[4809]: I0312 08:36:14.107151 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:36:14 crc kubenswrapper[4809]: E0312 08:36:14.107409 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:36:14 crc kubenswrapper[4809]: I0312 08:36:14.893486 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" event={"ID":"7291754c-2659-44a6-b305-527b19034672","Type":"ContainerStarted","Data":"516953a71f7b2f59911047c1acb2f4371328aaf2a5394094c49f9136d946714d"} Mar 12 08:36:14 crc kubenswrapper[4809]: I0312 08:36:14.894168 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" event={"ID":"7291754c-2659-44a6-b305-527b19034672","Type":"ContainerStarted","Data":"9d65eacd5834e03ad43c9248cd440267900fc0e3d919f785173c2249d7f0e8dd"} Mar 12 08:36:14 crc kubenswrapper[4809]: I0312 08:36:14.918961 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" podStartSLOduration=2.451804198 podStartE2EDuration="2.918940099s" podCreationTimestamp="2026-03-12 08:36:12 +0000 UTC" firstStartedPulling="2026-03-12 
08:36:13.983000002 +0000 UTC m=+2247.565035735" lastFinishedPulling="2026-03-12 08:36:14.450135903 +0000 UTC m=+2248.032171636" observedRunningTime="2026-03-12 08:36:14.918741774 +0000 UTC m=+2248.500777537" watchObservedRunningTime="2026-03-12 08:36:14.918940099 +0000 UTC m=+2248.500975852" Mar 12 08:36:27 crc kubenswrapper[4809]: I0312 08:36:27.117078 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:36:27 crc kubenswrapper[4809]: E0312 08:36:27.118023 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:36:30 crc kubenswrapper[4809]: I0312 08:36:30.969262 4809 scope.go:117] "RemoveContainer" containerID="5b90b48ef5fc296ddac81d2e4f25cde423906c39d086a92406eed385bd5f370f" Mar 12 08:36:31 crc kubenswrapper[4809]: I0312 08:36:31.038407 4809 scope.go:117] "RemoveContainer" containerID="06827f215a5beefb6b2dc60783316a20e387ab9c23e3d80e5ba1763b2e194afe" Mar 12 08:36:31 crc kubenswrapper[4809]: I0312 08:36:31.063608 4809 scope.go:117] "RemoveContainer" containerID="00e51b55baea3a9058a538fab5506ef615a01a6c2515cdf89e35ddf508e181b2" Mar 12 08:36:31 crc kubenswrapper[4809]: I0312 08:36:31.134161 4809 scope.go:117] "RemoveContainer" containerID="7a0a35cb89bb6f296c304da624d5217bb095ccc7c7933fbcb6efbef2f6ea96b5" Mar 12 08:36:31 crc kubenswrapper[4809]: I0312 08:36:31.185621 4809 scope.go:117] "RemoveContainer" containerID="3d5c947b257864e12e92b5e465b049ec476bdd76f6f249a743a5bc23774eb9da" Mar 12 08:36:31 crc kubenswrapper[4809]: I0312 08:36:31.248059 4809 scope.go:117] "RemoveContainer" 
containerID="d2160229e26eb2441b207a1b23558ef1b1791130c13a1faa180bce47278795bd" Mar 12 08:36:41 crc kubenswrapper[4809]: I0312 08:36:41.072531 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-fblks"] Mar 12 08:36:41 crc kubenswrapper[4809]: I0312 08:36:41.087416 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-fblks"] Mar 12 08:36:41 crc kubenswrapper[4809]: I0312 08:36:41.125185 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78e41a4-1f33-45bf-a0b8-6e53b47a3f70" path="/var/lib/kubelet/pods/d78e41a4-1f33-45bf-a0b8-6e53b47a3f70/volumes" Mar 12 08:36:42 crc kubenswrapper[4809]: I0312 08:36:42.106451 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:36:42 crc kubenswrapper[4809]: E0312 08:36:42.107035 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:36:52 crc kubenswrapper[4809]: I0312 08:36:52.424666 4809 generic.go:334] "Generic (PLEG): container finished" podID="7291754c-2659-44a6-b305-527b19034672" containerID="516953a71f7b2f59911047c1acb2f4371328aaf2a5394094c49f9136d946714d" exitCode=0 Mar 12 08:36:52 crc kubenswrapper[4809]: I0312 08:36:52.424780 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" event={"ID":"7291754c-2659-44a6-b305-527b19034672","Type":"ContainerDied","Data":"516953a71f7b2f59911047c1acb2f4371328aaf2a5394094c49f9136d946714d"} Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.045427 4809 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.180514 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-inventory\") pod \"7291754c-2659-44a6-b305-527b19034672\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.180984 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-ssh-key-openstack-edpm-ipam\") pod \"7291754c-2659-44a6-b305-527b19034672\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.181088 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68bb8\" (UniqueName: \"kubernetes.io/projected/7291754c-2659-44a6-b305-527b19034672-kube-api-access-68bb8\") pod \"7291754c-2659-44a6-b305-527b19034672\" (UID: \"7291754c-2659-44a6-b305-527b19034672\") " Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.186659 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7291754c-2659-44a6-b305-527b19034672-kube-api-access-68bb8" (OuterVolumeSpecName: "kube-api-access-68bb8") pod "7291754c-2659-44a6-b305-527b19034672" (UID: "7291754c-2659-44a6-b305-527b19034672"). InnerVolumeSpecName "kube-api-access-68bb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.214150 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-inventory" (OuterVolumeSpecName: "inventory") pod "7291754c-2659-44a6-b305-527b19034672" (UID: "7291754c-2659-44a6-b305-527b19034672"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.214551 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7291754c-2659-44a6-b305-527b19034672" (UID: "7291754c-2659-44a6-b305-527b19034672"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.284116 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.284171 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7291754c-2659-44a6-b305-527b19034672-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.284184 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68bb8\" (UniqueName: \"kubernetes.io/projected/7291754c-2659-44a6-b305-527b19034672-kube-api-access-68bb8\") on node \"crc\" DevicePath \"\"" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.454042 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" 
event={"ID":"7291754c-2659-44a6-b305-527b19034672","Type":"ContainerDied","Data":"9d65eacd5834e03ad43c9248cd440267900fc0e3d919f785173c2249d7f0e8dd"} Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.454082 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d65eacd5834e03ad43c9248cd440267900fc0e3d919f785173c2249d7f0e8dd" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.454175 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wc48v" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.542332 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2"] Mar 12 08:36:54 crc kubenswrapper[4809]: E0312 08:36:54.542890 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7291754c-2659-44a6-b305-527b19034672" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.542904 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7291754c-2659-44a6-b305-527b19034672" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.543156 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7291754c-2659-44a6-b305-527b19034672" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.544061 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.549202 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.549228 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.549721 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.549847 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.569280 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2"] Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.693235 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8klb\" (UniqueName: \"kubernetes.io/projected/f49ea7c3-c615-485c-8780-984d8c590f22-kube-api-access-t8klb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.693635 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.693881 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.795941 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.796129 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8klb\" (UniqueName: \"kubernetes.io/projected/f49ea7c3-c615-485c-8780-984d8c590f22-kube-api-access-t8klb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.796241 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.800453 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.800464 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.813019 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8klb\" (UniqueName: \"kubernetes.io/projected/f49ea7c3-c615-485c-8780-984d8c590f22-kube-api-access-t8klb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:54 crc kubenswrapper[4809]: I0312 08:36:54.862846 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:36:55 crc kubenswrapper[4809]: W0312 08:36:55.448793 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf49ea7c3_c615_485c_8780_984d8c590f22.slice/crio-b6757d598fbfc05e4ba29eb912974ba2497e56e8a704fd1df525dfe0bc9266d9 WatchSource:0}: Error finding container b6757d598fbfc05e4ba29eb912974ba2497e56e8a704fd1df525dfe0bc9266d9: Status 404 returned error can't find the container with id b6757d598fbfc05e4ba29eb912974ba2497e56e8a704fd1df525dfe0bc9266d9 Mar 12 08:36:55 crc kubenswrapper[4809]: I0312 08:36:55.450362 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2"] Mar 12 08:36:55 crc kubenswrapper[4809]: I0312 08:36:55.468235 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" event={"ID":"f49ea7c3-c615-485c-8780-984d8c590f22","Type":"ContainerStarted","Data":"b6757d598fbfc05e4ba29eb912974ba2497e56e8a704fd1df525dfe0bc9266d9"} Mar 12 08:36:56 crc kubenswrapper[4809]: I0312 08:36:56.477581 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" event={"ID":"f49ea7c3-c615-485c-8780-984d8c590f22","Type":"ContainerStarted","Data":"a5395c742c0da89095387e5b47dee49c06de61d9dafd09a33273f7b44eeda67e"} Mar 12 08:36:56 crc kubenswrapper[4809]: I0312 08:36:56.498616 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" podStartSLOduration=2.02747811 podStartE2EDuration="2.49859221s" podCreationTimestamp="2026-03-12 08:36:54 +0000 UTC" firstStartedPulling="2026-03-12 08:36:55.452234989 +0000 UTC m=+2289.034270722" lastFinishedPulling="2026-03-12 08:36:55.923349079 +0000 UTC m=+2289.505384822" 
observedRunningTime="2026-03-12 08:36:56.496999486 +0000 UTC m=+2290.079035229" watchObservedRunningTime="2026-03-12 08:36:56.49859221 +0000 UTC m=+2290.080627953" Mar 12 08:36:57 crc kubenswrapper[4809]: I0312 08:36:57.113459 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:36:57 crc kubenswrapper[4809]: E0312 08:36:57.114092 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:37:12 crc kubenswrapper[4809]: I0312 08:37:12.107324 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:37:12 crc kubenswrapper[4809]: E0312 08:37:12.108619 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:37:26 crc kubenswrapper[4809]: I0312 08:37:26.107496 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:37:26 crc kubenswrapper[4809]: E0312 08:37:26.109032 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:37:31 crc kubenswrapper[4809]: I0312 08:37:31.448086 4809 scope.go:117] "RemoveContainer" containerID="6e5cc0d743f0aac26a1e000c34f95ad967b50780bc885fdcc6981791b211f92e" Mar 12 08:37:40 crc kubenswrapper[4809]: I0312 08:37:40.109634 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:37:40 crc kubenswrapper[4809]: E0312 08:37:40.111702 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:37:43 crc kubenswrapper[4809]: I0312 08:37:43.054327 4809 generic.go:334] "Generic (PLEG): container finished" podID="f49ea7c3-c615-485c-8780-984d8c590f22" containerID="a5395c742c0da89095387e5b47dee49c06de61d9dafd09a33273f7b44eeda67e" exitCode=0 Mar 12 08:37:43 crc kubenswrapper[4809]: I0312 08:37:43.054457 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" event={"ID":"f49ea7c3-c615-485c-8780-984d8c590f22","Type":"ContainerDied","Data":"a5395c742c0da89095387e5b47dee49c06de61d9dafd09a33273f7b44eeda67e"} Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.626988 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.797979 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8klb\" (UniqueName: \"kubernetes.io/projected/f49ea7c3-c615-485c-8780-984d8c590f22-kube-api-access-t8klb\") pod \"f49ea7c3-c615-485c-8780-984d8c590f22\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.798060 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-inventory\") pod \"f49ea7c3-c615-485c-8780-984d8c590f22\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.798098 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-ssh-key-openstack-edpm-ipam\") pod \"f49ea7c3-c615-485c-8780-984d8c590f22\" (UID: \"f49ea7c3-c615-485c-8780-984d8c590f22\") " Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.807811 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49ea7c3-c615-485c-8780-984d8c590f22-kube-api-access-t8klb" (OuterVolumeSpecName: "kube-api-access-t8klb") pod "f49ea7c3-c615-485c-8780-984d8c590f22" (UID: "f49ea7c3-c615-485c-8780-984d8c590f22"). InnerVolumeSpecName "kube-api-access-t8klb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.832477 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f49ea7c3-c615-485c-8780-984d8c590f22" (UID: "f49ea7c3-c615-485c-8780-984d8c590f22"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.846617 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-inventory" (OuterVolumeSpecName: "inventory") pod "f49ea7c3-c615-485c-8780-984d8c590f22" (UID: "f49ea7c3-c615-485c-8780-984d8c590f22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.901914 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8klb\" (UniqueName: \"kubernetes.io/projected/f49ea7c3-c615-485c-8780-984d8c590f22-kube-api-access-t8klb\") on node \"crc\" DevicePath \"\"" Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.901957 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:37:44 crc kubenswrapper[4809]: I0312 08:37:44.901971 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f49ea7c3-c615-485c-8780-984d8c590f22-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.077600 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" 
event={"ID":"f49ea7c3-c615-485c-8780-984d8c590f22","Type":"ContainerDied","Data":"b6757d598fbfc05e4ba29eb912974ba2497e56e8a704fd1df525dfe0bc9266d9"} Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.077653 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6757d598fbfc05e4ba29eb912974ba2497e56e8a704fd1df525dfe0bc9266d9" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.077664 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.177733 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-w6xqs"] Mar 12 08:37:45 crc kubenswrapper[4809]: E0312 08:37:45.178348 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f49ea7c3-c615-485c-8780-984d8c590f22" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.178367 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f49ea7c3-c615-485c-8780-984d8c590f22" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.178625 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f49ea7c3-c615-485c-8780-984d8c590f22" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.179511 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.182680 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.186661 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.189657 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-w6xqs"] Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.189936 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.207163 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.311487 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.311575 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.311616 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7tg4h\" (UniqueName: \"kubernetes.io/projected/08e213da-a2a0-49b8-851c-c33ded78276a-kube-api-access-7tg4h\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.414542 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.414646 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.414687 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tg4h\" (UniqueName: \"kubernetes.io/projected/08e213da-a2a0-49b8-851c-c33ded78276a-kube-api-access-7tg4h\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.419339 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.432518 4809 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.433524 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tg4h\" (UniqueName: \"kubernetes.io/projected/08e213da-a2a0-49b8-851c-c33ded78276a-kube-api-access-7tg4h\") pod \"ssh-known-hosts-edpm-deployment-w6xqs\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:45 crc kubenswrapper[4809]: I0312 08:37:45.512171 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:46 crc kubenswrapper[4809]: I0312 08:37:46.072932 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-w6xqs"] Mar 12 08:37:46 crc kubenswrapper[4809]: I0312 08:37:46.093409 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" event={"ID":"08e213da-a2a0-49b8-851c-c33ded78276a","Type":"ContainerStarted","Data":"d34752110bf171b515105a1b8062285dd9b62cbb28522dd0f334f4b1772b12c4"} Mar 12 08:37:47 crc kubenswrapper[4809]: I0312 08:37:47.120596 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" event={"ID":"08e213da-a2a0-49b8-851c-c33ded78276a","Type":"ContainerStarted","Data":"41478906e995ce6afb50ec7a8bba5bd17a0dcaf94c0fee53a757519312306c8a"} Mar 12 08:37:47 crc kubenswrapper[4809]: I0312 08:37:47.176203 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" 
podStartSLOduration=1.661128924 podStartE2EDuration="2.176174618s" podCreationTimestamp="2026-03-12 08:37:45 +0000 UTC" firstStartedPulling="2026-03-12 08:37:46.082714746 +0000 UTC m=+2339.664750489" lastFinishedPulling="2026-03-12 08:37:46.59776045 +0000 UTC m=+2340.179796183" observedRunningTime="2026-03-12 08:37:47.161925681 +0000 UTC m=+2340.743961414" watchObservedRunningTime="2026-03-12 08:37:47.176174618 +0000 UTC m=+2340.758210371" Mar 12 08:37:53 crc kubenswrapper[4809]: I0312 08:37:53.190875 4809 generic.go:334] "Generic (PLEG): container finished" podID="08e213da-a2a0-49b8-851c-c33ded78276a" containerID="41478906e995ce6afb50ec7a8bba5bd17a0dcaf94c0fee53a757519312306c8a" exitCode=0 Mar 12 08:37:53 crc kubenswrapper[4809]: I0312 08:37:53.191015 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" event={"ID":"08e213da-a2a0-49b8-851c-c33ded78276a","Type":"ContainerDied","Data":"41478906e995ce6afb50ec7a8bba5bd17a0dcaf94c0fee53a757519312306c8a"} Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.700794 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.762408 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-inventory-0\") pod \"08e213da-a2a0-49b8-851c-c33ded78276a\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.762740 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tg4h\" (UniqueName: \"kubernetes.io/projected/08e213da-a2a0-49b8-851c-c33ded78276a-kube-api-access-7tg4h\") pod \"08e213da-a2a0-49b8-851c-c33ded78276a\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.762800 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-ssh-key-openstack-edpm-ipam\") pod \"08e213da-a2a0-49b8-851c-c33ded78276a\" (UID: \"08e213da-a2a0-49b8-851c-c33ded78276a\") " Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.770432 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e213da-a2a0-49b8-851c-c33ded78276a-kube-api-access-7tg4h" (OuterVolumeSpecName: "kube-api-access-7tg4h") pod "08e213da-a2a0-49b8-851c-c33ded78276a" (UID: "08e213da-a2a0-49b8-851c-c33ded78276a"). InnerVolumeSpecName "kube-api-access-7tg4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.796026 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "08e213da-a2a0-49b8-851c-c33ded78276a" (UID: "08e213da-a2a0-49b8-851c-c33ded78276a"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.807159 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "08e213da-a2a0-49b8-851c-c33ded78276a" (UID: "08e213da-a2a0-49b8-851c-c33ded78276a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.866462 4809 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.866499 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tg4h\" (UniqueName: \"kubernetes.io/projected/08e213da-a2a0-49b8-851c-c33ded78276a-kube-api-access-7tg4h\") on node \"crc\" DevicePath \"\"" Mar 12 08:37:54 crc kubenswrapper[4809]: I0312 08:37:54.866514 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08e213da-a2a0-49b8-851c-c33ded78276a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.107397 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:37:55 crc kubenswrapper[4809]: E0312 08:37:55.108016 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.212723 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" event={"ID":"08e213da-a2a0-49b8-851c-c33ded78276a","Type":"ContainerDied","Data":"d34752110bf171b515105a1b8062285dd9b62cbb28522dd0f334f4b1772b12c4"} Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.212778 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d34752110bf171b515105a1b8062285dd9b62cbb28522dd0f334f4b1772b12c4" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.212857 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-w6xqs" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.308055 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5"] Mar 12 08:37:55 crc kubenswrapper[4809]: E0312 08:37:55.308803 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e213da-a2a0-49b8-851c-c33ded78276a" containerName="ssh-known-hosts-edpm-deployment" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.308834 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e213da-a2a0-49b8-851c-c33ded78276a" containerName="ssh-known-hosts-edpm-deployment" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.309235 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e213da-a2a0-49b8-851c-c33ded78276a" containerName="ssh-known-hosts-edpm-deployment" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.310507 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.312991 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.313069 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.313304 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.316151 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.340029 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5"] Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.382094 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldj24\" (UniqueName: \"kubernetes.io/projected/3bd1743a-df42-404b-b846-0ca55bf273ef-kube-api-access-ldj24\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.382307 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.382347 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.485372 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.485503 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.485667 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldj24\" (UniqueName: \"kubernetes.io/projected/3bd1743a-df42-404b-b846-0ca55bf273ef-kube-api-access-ldj24\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.505880 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.505981 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.511654 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldj24\" (UniqueName: \"kubernetes.io/projected/3bd1743a-df42-404b-b846-0ca55bf273ef-kube-api-access-ldj24\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9qpr5\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:55 crc kubenswrapper[4809]: I0312 08:37:55.635451 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:37:56 crc kubenswrapper[4809]: I0312 08:37:56.268803 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5"] Mar 12 08:37:57 crc kubenswrapper[4809]: I0312 08:37:57.240186 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" event={"ID":"3bd1743a-df42-404b-b846-0ca55bf273ef","Type":"ContainerStarted","Data":"87a705d3a01cc55e57eb1a185d95f132c74e5d032b8a57a187506b528c92061e"} Mar 12 08:37:57 crc kubenswrapper[4809]: I0312 08:37:57.240652 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" event={"ID":"3bd1743a-df42-404b-b846-0ca55bf273ef","Type":"ContainerStarted","Data":"bc29e9c19820745da3c3c934c9500a2c38e1324101f98a39b869de3544ac43e1"} Mar 12 08:37:57 crc kubenswrapper[4809]: I0312 08:37:57.262932 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" podStartSLOduration=1.726056705 podStartE2EDuration="2.262908512s" podCreationTimestamp="2026-03-12 08:37:55 +0000 UTC" firstStartedPulling="2026-03-12 08:37:56.26869203 +0000 UTC m=+2349.850727763" lastFinishedPulling="2026-03-12 08:37:56.805543827 +0000 UTC m=+2350.387579570" observedRunningTime="2026-03-12 08:37:57.256065526 +0000 UTC m=+2350.838101259" watchObservedRunningTime="2026-03-12 08:37:57.262908512 +0000 UTC m=+2350.844944255" Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.163887 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555078-5hcvx"] Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.166044 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555078-5hcvx" Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.168741 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.168807 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.169157 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.198548 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555078-5hcvx"] Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.210830 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q9cl\" (UniqueName: \"kubernetes.io/projected/e95f881b-5b87-442e-bd68-b9dfaf6c1bf3-kube-api-access-4q9cl\") pod \"auto-csr-approver-29555078-5hcvx\" (UID: \"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3\") " pod="openshift-infra/auto-csr-approver-29555078-5hcvx" Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.313743 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q9cl\" (UniqueName: \"kubernetes.io/projected/e95f881b-5b87-442e-bd68-b9dfaf6c1bf3-kube-api-access-4q9cl\") pod \"auto-csr-approver-29555078-5hcvx\" (UID: \"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3\") " pod="openshift-infra/auto-csr-approver-29555078-5hcvx" Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.357966 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q9cl\" (UniqueName: \"kubernetes.io/projected/e95f881b-5b87-442e-bd68-b9dfaf6c1bf3-kube-api-access-4q9cl\") pod \"auto-csr-approver-29555078-5hcvx\" (UID: \"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3\") " 
pod="openshift-infra/auto-csr-approver-29555078-5hcvx" Mar 12 08:38:00 crc kubenswrapper[4809]: I0312 08:38:00.485694 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555078-5hcvx" Mar 12 08:38:01 crc kubenswrapper[4809]: I0312 08:38:01.010395 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555078-5hcvx"] Mar 12 08:38:01 crc kubenswrapper[4809]: W0312 08:38:01.014861 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode95f881b_5b87_442e_bd68_b9dfaf6c1bf3.slice/crio-7f48973f5273b6f22a8247765cd826393b271cde368988a7a2211449886c2316 WatchSource:0}: Error finding container 7f48973f5273b6f22a8247765cd826393b271cde368988a7a2211449886c2316: Status 404 returned error can't find the container with id 7f48973f5273b6f22a8247765cd826393b271cde368988a7a2211449886c2316 Mar 12 08:38:01 crc kubenswrapper[4809]: I0312 08:38:01.297261 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555078-5hcvx" event={"ID":"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3","Type":"ContainerStarted","Data":"7f48973f5273b6f22a8247765cd826393b271cde368988a7a2211449886c2316"} Mar 12 08:38:05 crc kubenswrapper[4809]: I0312 08:38:05.340940 4809 generic.go:334] "Generic (PLEG): container finished" podID="3bd1743a-df42-404b-b846-0ca55bf273ef" containerID="87a705d3a01cc55e57eb1a185d95f132c74e5d032b8a57a187506b528c92061e" exitCode=0 Mar 12 08:38:05 crc kubenswrapper[4809]: I0312 08:38:05.341061 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" event={"ID":"3bd1743a-df42-404b-b846-0ca55bf273ef","Type":"ContainerDied","Data":"87a705d3a01cc55e57eb1a185d95f132c74e5d032b8a57a187506b528c92061e"} Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.356493 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29555078-5hcvx" event={"ID":"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3","Type":"ContainerStarted","Data":"b56025b8da2276bfbff31a5a75c3ba43bb0e2549bd4d40235c4f51cbeb3b414a"} Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.380608 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555078-5hcvx" podStartSLOduration=1.5686615210000001 podStartE2EDuration="6.38058562s" podCreationTimestamp="2026-03-12 08:38:00 +0000 UTC" firstStartedPulling="2026-03-12 08:38:01.018784089 +0000 UTC m=+2354.600819822" lastFinishedPulling="2026-03-12 08:38:05.830708168 +0000 UTC m=+2359.412743921" observedRunningTime="2026-03-12 08:38:06.371704699 +0000 UTC m=+2359.953740432" watchObservedRunningTime="2026-03-12 08:38:06.38058562 +0000 UTC m=+2359.962621343" Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.812310 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.907515 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-inventory\") pod \"3bd1743a-df42-404b-b846-0ca55bf273ef\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.907897 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-ssh-key-openstack-edpm-ipam\") pod \"3bd1743a-df42-404b-b846-0ca55bf273ef\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.908893 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldj24\" (UniqueName: 
\"kubernetes.io/projected/3bd1743a-df42-404b-b846-0ca55bf273ef-kube-api-access-ldj24\") pod \"3bd1743a-df42-404b-b846-0ca55bf273ef\" (UID: \"3bd1743a-df42-404b-b846-0ca55bf273ef\") " Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.914404 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd1743a-df42-404b-b846-0ca55bf273ef-kube-api-access-ldj24" (OuterVolumeSpecName: "kube-api-access-ldj24") pod "3bd1743a-df42-404b-b846-0ca55bf273ef" (UID: "3bd1743a-df42-404b-b846-0ca55bf273ef"). InnerVolumeSpecName "kube-api-access-ldj24". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.949950 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3bd1743a-df42-404b-b846-0ca55bf273ef" (UID: "3bd1743a-df42-404b-b846-0ca55bf273ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:38:06 crc kubenswrapper[4809]: I0312 08:38:06.950632 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-inventory" (OuterVolumeSpecName: "inventory") pod "3bd1743a-df42-404b-b846-0ca55bf273ef" (UID: "3bd1743a-df42-404b-b846-0ca55bf273ef"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.013045 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.013081 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldj24\" (UniqueName: \"kubernetes.io/projected/3bd1743a-df42-404b-b846-0ca55bf273ef-kube-api-access-ldj24\") on node \"crc\" DevicePath \"\"" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.013091 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bd1743a-df42-404b-b846-0ca55bf273ef-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.132704 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:38:07 crc kubenswrapper[4809]: E0312 08:38:07.133777 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.369543 4809 generic.go:334] "Generic (PLEG): container finished" podID="e95f881b-5b87-442e-bd68-b9dfaf6c1bf3" containerID="b56025b8da2276bfbff31a5a75c3ba43bb0e2549bd4d40235c4f51cbeb3b414a" exitCode=0 Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.369648 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555078-5hcvx" 
event={"ID":"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3","Type":"ContainerDied","Data":"b56025b8da2276bfbff31a5a75c3ba43bb0e2549bd4d40235c4f51cbeb3b414a"} Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.371719 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" event={"ID":"3bd1743a-df42-404b-b846-0ca55bf273ef","Type":"ContainerDied","Data":"bc29e9c19820745da3c3c934c9500a2c38e1324101f98a39b869de3544ac43e1"} Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.371825 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc29e9c19820745da3c3c934c9500a2c38e1324101f98a39b869de3544ac43e1" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.371933 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9qpr5" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.463357 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf"] Mar 12 08:38:07 crc kubenswrapper[4809]: E0312 08:38:07.464177 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd1743a-df42-404b-b846-0ca55bf273ef" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.464259 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd1743a-df42-404b-b846-0ca55bf273ef" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.464590 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd1743a-df42-404b-b846-0ca55bf273ef" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.465517 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.470873 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.471140 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.471306 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.471531 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.476597 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf"] Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.542528 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.542618 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjkh4\" (UniqueName: \"kubernetes.io/projected/6d715d91-c333-4191-afe9-f58e0350a408-kube-api-access-jjkh4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.542963 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.645446 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.645675 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.645720 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjkh4\" (UniqueName: \"kubernetes.io/projected/6d715d91-c333-4191-afe9-f58e0350a408-kube-api-access-jjkh4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.651904 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.652042 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.663660 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjkh4\" (UniqueName: \"kubernetes.io/projected/6d715d91-c333-4191-afe9-f58e0350a408-kube-api-access-jjkh4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:07 crc kubenswrapper[4809]: I0312 08:38:07.797080 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:08 crc kubenswrapper[4809]: I0312 08:38:08.594177 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf"] Mar 12 08:38:08 crc kubenswrapper[4809]: I0312 08:38:08.916992 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555078-5hcvx" Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.012139 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q9cl\" (UniqueName: \"kubernetes.io/projected/e95f881b-5b87-442e-bd68-b9dfaf6c1bf3-kube-api-access-4q9cl\") pod \"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3\" (UID: \"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3\") " Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.022539 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95f881b-5b87-442e-bd68-b9dfaf6c1bf3-kube-api-access-4q9cl" (OuterVolumeSpecName: "kube-api-access-4q9cl") pod "e95f881b-5b87-442e-bd68-b9dfaf6c1bf3" (UID: "e95f881b-5b87-442e-bd68-b9dfaf6c1bf3"). InnerVolumeSpecName "kube-api-access-4q9cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.115442 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q9cl\" (UniqueName: \"kubernetes.io/projected/e95f881b-5b87-442e-bd68-b9dfaf6c1bf3-kube-api-access-4q9cl\") on node \"crc\" DevicePath \"\"" Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.395332 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555078-5hcvx" Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.396506 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555078-5hcvx" event={"ID":"e95f881b-5b87-442e-bd68-b9dfaf6c1bf3","Type":"ContainerDied","Data":"7f48973f5273b6f22a8247765cd826393b271cde368988a7a2211449886c2316"} Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.396537 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f48973f5273b6f22a8247765cd826393b271cde368988a7a2211449886c2316" Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.400425 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" event={"ID":"6d715d91-c333-4191-afe9-f58e0350a408","Type":"ContainerStarted","Data":"6dc838cdfe08ae6901b1232933406b44b26432de33c47920b7835d7558951622"} Mar 12 08:38:09 crc kubenswrapper[4809]: E0312 08:38:09.439459 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode95f881b_5b87_442e_bd68_b9dfaf6c1bf3.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.456431 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555072-v4pfw"] Mar 12 08:38:09 crc kubenswrapper[4809]: I0312 08:38:09.469616 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555072-v4pfw"] Mar 12 08:38:10 crc kubenswrapper[4809]: I0312 08:38:10.412610 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" event={"ID":"6d715d91-c333-4191-afe9-f58e0350a408","Type":"ContainerStarted","Data":"e2fe6b10de718b5603f6414103487f71ca118c29fbc89b4b264575530c33c7f1"} Mar 12 08:38:10 
crc kubenswrapper[4809]: I0312 08:38:10.446737 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" podStartSLOduration=2.989516187 podStartE2EDuration="3.446714057s" podCreationTimestamp="2026-03-12 08:38:07 +0000 UTC" firstStartedPulling="2026-03-12 08:38:08.593751869 +0000 UTC m=+2362.175787602" lastFinishedPulling="2026-03-12 08:38:09.050949739 +0000 UTC m=+2362.632985472" observedRunningTime="2026-03-12 08:38:10.435596584 +0000 UTC m=+2364.017632317" watchObservedRunningTime="2026-03-12 08:38:10.446714057 +0000 UTC m=+2364.028749790" Mar 12 08:38:11 crc kubenswrapper[4809]: I0312 08:38:11.125693 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="838ee61a-a827-4619-84e2-e9ecc34eae6e" path="/var/lib/kubelet/pods/838ee61a-a827-4619-84e2-e9ecc34eae6e/volumes" Mar 12 08:38:19 crc kubenswrapper[4809]: E0312 08:38:19.751085 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d715d91_c333_4191_afe9_f58e0350a408.slice/crio-e2fe6b10de718b5603f6414103487f71ca118c29fbc89b4b264575530c33c7f1.scope\": RecentStats: unable to find data in memory cache]" Mar 12 08:38:20 crc kubenswrapper[4809]: I0312 08:38:20.059904 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-j9k6d"] Mar 12 08:38:20 crc kubenswrapper[4809]: I0312 08:38:20.076104 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-j9k6d"] Mar 12 08:38:20 crc kubenswrapper[4809]: I0312 08:38:20.543806 4809 generic.go:334] "Generic (PLEG): container finished" podID="6d715d91-c333-4191-afe9-f58e0350a408" containerID="e2fe6b10de718b5603f6414103487f71ca118c29fbc89b4b264575530c33c7f1" exitCode=0 Mar 12 08:38:20 crc kubenswrapper[4809]: I0312 08:38:20.543850 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" event={"ID":"6d715d91-c333-4191-afe9-f58e0350a408","Type":"ContainerDied","Data":"e2fe6b10de718b5603f6414103487f71ca118c29fbc89b4b264575530c33c7f1"} Mar 12 08:38:21 crc kubenswrapper[4809]: I0312 08:38:21.118337 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8873bebf-493e-4f3d-85a0-fca09ce4d946" path="/var/lib/kubelet/pods/8873bebf-493e-4f3d-85a0-fca09ce4d946/volumes" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.106238 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:38:22 crc kubenswrapper[4809]: E0312 08:38:22.107014 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.129574 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.273153 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-ssh-key-openstack-edpm-ipam\") pod \"6d715d91-c333-4191-afe9-f58e0350a408\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.273632 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-inventory\") pod \"6d715d91-c333-4191-afe9-f58e0350a408\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.274005 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjkh4\" (UniqueName: \"kubernetes.io/projected/6d715d91-c333-4191-afe9-f58e0350a408-kube-api-access-jjkh4\") pod \"6d715d91-c333-4191-afe9-f58e0350a408\" (UID: \"6d715d91-c333-4191-afe9-f58e0350a408\") " Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.282671 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d715d91-c333-4191-afe9-f58e0350a408-kube-api-access-jjkh4" (OuterVolumeSpecName: "kube-api-access-jjkh4") pod "6d715d91-c333-4191-afe9-f58e0350a408" (UID: "6d715d91-c333-4191-afe9-f58e0350a408"). InnerVolumeSpecName "kube-api-access-jjkh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.308910 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-inventory" (OuterVolumeSpecName: "inventory") pod "6d715d91-c333-4191-afe9-f58e0350a408" (UID: "6d715d91-c333-4191-afe9-f58e0350a408"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.327502 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6d715d91-c333-4191-afe9-f58e0350a408" (UID: "6d715d91-c333-4191-afe9-f58e0350a408"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.379824 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjkh4\" (UniqueName: \"kubernetes.io/projected/6d715d91-c333-4191-afe9-f58e0350a408-kube-api-access-jjkh4\") on node \"crc\" DevicePath \"\"" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.379864 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.379874 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d715d91-c333-4191-afe9-f58e0350a408-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.567503 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" event={"ID":"6d715d91-c333-4191-afe9-f58e0350a408","Type":"ContainerDied","Data":"6dc838cdfe08ae6901b1232933406b44b26432de33c47920b7835d7558951622"} Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.567570 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc838cdfe08ae6901b1232933406b44b26432de33c47920b7835d7558951622" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 
08:38:22.567597 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.666440 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q"] Mar 12 08:38:22 crc kubenswrapper[4809]: E0312 08:38:22.666975 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d715d91-c333-4191-afe9-f58e0350a408" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.666997 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d715d91-c333-4191-afe9-f58e0350a408" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:38:22 crc kubenswrapper[4809]: E0312 08:38:22.667020 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95f881b-5b87-442e-bd68-b9dfaf6c1bf3" containerName="oc" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.667027 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95f881b-5b87-442e-bd68-b9dfaf6c1bf3" containerName="oc" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.668012 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d715d91-c333-4191-afe9-f58e0350a408" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.668049 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95f881b-5b87-442e-bd68-b9dfaf6c1bf3" containerName="oc" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.669520 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.673436 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.673643 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.673827 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.673985 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.674036 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.674242 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.674523 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.676564 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.679456 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.685060 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q"] Mar 12 08:38:22 crc 
kubenswrapper[4809]: I0312 08:38:22.790995 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.791411 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.791560 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.791636 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: 
I0312 08:38:22.791818 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.791950 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.792205 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.792360 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc 
kubenswrapper[4809]: I0312 08:38:22.792393 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.792480 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.792590 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.792751 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.792831 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.792886 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrm8k\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-kube-api-access-rrm8k\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.792974 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.793008 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896054 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896187 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896281 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896352 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc 
kubenswrapper[4809]: I0312 08:38:22.896440 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896517 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896619 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896714 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896772 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896824 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896907 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.896998 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.897057 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.897110 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrm8k\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-kube-api-access-rrm8k\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.897194 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.897230 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.902866 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.903308 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.903418 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.903770 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.906329 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.906548 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.906593 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.906621 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.907186 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.907863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.908800 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.914753 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.916262 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.917917 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.918245 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.925461 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrm8k\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-kube-api-access-rrm8k\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:22 crc kubenswrapper[4809]: I0312 08:38:22.987405 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:38:23 crc kubenswrapper[4809]: I0312 08:38:23.614967 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q"] Mar 12 08:38:24 crc kubenswrapper[4809]: I0312 08:38:24.621638 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" event={"ID":"4f1326dd-cb21-41ec-9927-70a2f27b2020","Type":"ContainerStarted","Data":"353c152a06543bd927002a25a3de7c03299bac45ed9336806786342b3ed11608"} Mar 12 08:38:24 crc kubenswrapper[4809]: I0312 08:38:24.622259 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" event={"ID":"4f1326dd-cb21-41ec-9927-70a2f27b2020","Type":"ContainerStarted","Data":"f2ba60a088a7e0dbbf52a28092ee4c1f075c6faa2b0977cff06d253ee5c9f72e"} Mar 12 08:38:24 crc kubenswrapper[4809]: I0312 08:38:24.664001 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" podStartSLOduration=2.189863861 podStartE2EDuration="2.66397655s" podCreationTimestamp="2026-03-12 08:38:22 +0000 UTC" firstStartedPulling="2026-03-12 08:38:23.624380484 +0000 UTC m=+2377.206416217" lastFinishedPulling="2026-03-12 08:38:24.098493173 +0000 UTC m=+2377.680528906" observedRunningTime="2026-03-12 08:38:24.645285872 +0000 UTC m=+2378.227321625" watchObservedRunningTime="2026-03-12 08:38:24.66397655 +0000 UTC m=+2378.246012293" Mar 12 08:38:31 crc kubenswrapper[4809]: I0312 08:38:31.552094 4809 scope.go:117] "RemoveContainer" containerID="ccfa28e695c0acb1bd487d351c5faf25abe20e45248891b1ec8089b850e56db9" Mar 12 08:38:31 crc kubenswrapper[4809]: I0312 08:38:31.607440 4809 scope.go:117] "RemoveContainer" containerID="03f7026909aa5992acc439e0cd8d5b3acff3a019fff5719436348a0284caace8" Mar 12 08:38:33 
crc kubenswrapper[4809]: I0312 08:38:33.110382 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:38:33 crc kubenswrapper[4809]: E0312 08:38:33.111253 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:38:44 crc kubenswrapper[4809]: I0312 08:38:44.106953 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:38:44 crc kubenswrapper[4809]: E0312 08:38:44.108055 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:38:55 crc kubenswrapper[4809]: I0312 08:38:55.106278 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:38:55 crc kubenswrapper[4809]: E0312 08:38:55.107250 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" 
Mar 12 08:39:05 crc kubenswrapper[4809]: I0312 08:39:05.244418 4809 generic.go:334] "Generic (PLEG): container finished" podID="4f1326dd-cb21-41ec-9927-70a2f27b2020" containerID="353c152a06543bd927002a25a3de7c03299bac45ed9336806786342b3ed11608" exitCode=0 Mar 12 08:39:05 crc kubenswrapper[4809]: I0312 08:39:05.244489 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" event={"ID":"4f1326dd-cb21-41ec-9927-70a2f27b2020","Type":"ContainerDied","Data":"353c152a06543bd927002a25a3de7c03299bac45ed9336806786342b3ed11608"} Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.743794 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814501 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814591 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814629 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-repo-setup-combined-ca-bundle\") pod 
\"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814665 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814704 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrm8k\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-kube-api-access-rrm8k\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814738 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ovn-combined-ca-bundle\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814760 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-combined-ca-bundle\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814794 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: 
\"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814814 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-nova-combined-ca-bundle\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814849 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-libvirt-combined-ca-bundle\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814904 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-power-monitoring-combined-ca-bundle\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.814992 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-neutron-metadata-combined-ca-bundle\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.815014 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ssh-key-openstack-edpm-ipam\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc 
kubenswrapper[4809]: I0312 08:39:06.815083 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-inventory\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.815135 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-bootstrap-combined-ca-bundle\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.815185 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-ovn-default-certs-0\") pod \"4f1326dd-cb21-41ec-9927-70a2f27b2020\" (UID: \"4f1326dd-cb21-41ec-9927-70a2f27b2020\") " Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.821494 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.821624 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.822544 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.824975 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.829930 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.830055 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-kube-api-access-rrm8k" (OuterVolumeSpecName: "kube-api-access-rrm8k") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "kube-api-access-rrm8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.830496 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.830603 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.830853 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.832337 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.833691 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.834673 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). 
InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.840804 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.845062 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.860814 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.875504 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-inventory" (OuterVolumeSpecName: "inventory") pod "4f1326dd-cb21-41ec-9927-70a2f27b2020" (UID: "4f1326dd-cb21-41ec-9927-70a2f27b2020"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919093 4809 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919158 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919179 4809 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919195 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919210 4809 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919227 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919241 4809 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919257 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919271 4809 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919283 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919298 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrm8k\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-kube-api-access-rrm8k\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919312 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919323 4809 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919336 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4f1326dd-cb21-41ec-9927-70a2f27b2020-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919348 4809 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:06 crc kubenswrapper[4809]: I0312 08:39:06.919364 4809 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1326dd-cb21-41ec-9927-70a2f27b2020-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.113952 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:39:07 crc kubenswrapper[4809]: E0312 08:39:07.114362 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.268963 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" 
event={"ID":"4f1326dd-cb21-41ec-9927-70a2f27b2020","Type":"ContainerDied","Data":"f2ba60a088a7e0dbbf52a28092ee4c1f075c6faa2b0977cff06d253ee5c9f72e"} Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.269542 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ba60a088a7e0dbbf52a28092ee4c1f075c6faa2b0977cff06d253ee5c9f72e" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.269027 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.406520 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch"] Mar 12 08:39:07 crc kubenswrapper[4809]: E0312 08:39:07.407033 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1326dd-cb21-41ec-9927-70a2f27b2020" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.407053 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1326dd-cb21-41ec-9927-70a2f27b2020" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.407301 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f1326dd-cb21-41ec-9927-70a2f27b2020" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.408087 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.412182 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.412351 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.412482 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.424469 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.424735 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.436869 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch"] Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.534073 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.534153 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/918530d9-5478-4136-8ba8-a36197850b6e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.534283 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mr4\" (UniqueName: \"kubernetes.io/projected/918530d9-5478-4136-8ba8-a36197850b6e-kube-api-access-f5mr4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.535084 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.535367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.638971 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.639087 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.639303 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.639382 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/918530d9-5478-4136-8ba8-a36197850b6e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.639503 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mr4\" (UniqueName: \"kubernetes.io/projected/918530d9-5478-4136-8ba8-a36197850b6e-kube-api-access-f5mr4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.640484 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/918530d9-5478-4136-8ba8-a36197850b6e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.644214 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.644342 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.652561 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.657495 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mr4\" (UniqueName: \"kubernetes.io/projected/918530d9-5478-4136-8ba8-a36197850b6e-kube-api-access-f5mr4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-45gch\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:07 crc kubenswrapper[4809]: I0312 08:39:07.735938 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:39:08 crc kubenswrapper[4809]: I0312 08:39:08.374008 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:39:08 crc kubenswrapper[4809]: I0312 08:39:08.376159 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch"] Mar 12 08:39:09 crc kubenswrapper[4809]: I0312 08:39:09.291044 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" event={"ID":"918530d9-5478-4136-8ba8-a36197850b6e","Type":"ContainerStarted","Data":"bfd4634651e6b4baaac26fa9875c586afd6e81f705d9f38b22686899635da7de"} Mar 12 08:39:09 crc kubenswrapper[4809]: I0312 08:39:09.291618 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" event={"ID":"918530d9-5478-4136-8ba8-a36197850b6e","Type":"ContainerStarted","Data":"3d227ef6beea29f1eaa1dcb7afeb44023898676103e03b68699eaf42418df87b"} Mar 12 08:39:09 crc kubenswrapper[4809]: I0312 08:39:09.315787 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" podStartSLOduration=1.8517405839999999 podStartE2EDuration="2.31576515s" podCreationTimestamp="2026-03-12 08:39:07 +0000 UTC" firstStartedPulling="2026-03-12 08:39:08.373536132 +0000 UTC m=+2421.955571875" lastFinishedPulling="2026-03-12 08:39:08.837560708 +0000 UTC m=+2422.419596441" observedRunningTime="2026-03-12 08:39:09.30952967 +0000 UTC m=+2422.891565403" watchObservedRunningTime="2026-03-12 08:39:09.31576515 +0000 UTC m=+2422.897800893" Mar 12 08:39:22 crc kubenswrapper[4809]: I0312 08:39:22.107176 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:39:22 crc kubenswrapper[4809]: E0312 08:39:22.108763 4809 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:39:24 crc kubenswrapper[4809]: I0312 08:39:24.058642 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-2cqrr"] Mar 12 08:39:24 crc kubenswrapper[4809]: I0312 08:39:24.069164 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-2cqrr"] Mar 12 08:39:25 crc kubenswrapper[4809]: I0312 08:39:25.125177 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38ca4679-8c23-449c-a9e8-fc4e224bd1af" path="/var/lib/kubelet/pods/38ca4679-8c23-449c-a9e8-fc4e224bd1af/volumes" Mar 12 08:39:31 crc kubenswrapper[4809]: I0312 08:39:31.743550 4809 scope.go:117] "RemoveContainer" containerID="4064341e4d2b5d0a89dcafe2e2ef9ec5e88e7d508fbe1c21d0c0ced5acbe01bf" Mar 12 08:39:33 crc kubenswrapper[4809]: I0312 08:39:33.106833 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:39:33 crc kubenswrapper[4809]: E0312 08:39:33.107896 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:39:46 crc kubenswrapper[4809]: I0312 08:39:46.106722 4809 scope.go:117] "RemoveContainer" 
containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:39:46 crc kubenswrapper[4809]: E0312 08:39:46.107700 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:39:57 crc kubenswrapper[4809]: I0312 08:39:57.117366 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:39:57 crc kubenswrapper[4809]: E0312 08:39:57.118272 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.163500 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555080-lt2dh"] Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.167073 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555080-lt2dh" Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.170133 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.170319 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.170905 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.173097 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555080-lt2dh"] Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.349469 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw6sj\" (UniqueName: \"kubernetes.io/projected/7f467235-08a5-4fa4-8e2e-d92ac7f6e831-kube-api-access-mw6sj\") pod \"auto-csr-approver-29555080-lt2dh\" (UID: \"7f467235-08a5-4fa4-8e2e-d92ac7f6e831\") " pod="openshift-infra/auto-csr-approver-29555080-lt2dh" Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.452158 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw6sj\" (UniqueName: \"kubernetes.io/projected/7f467235-08a5-4fa4-8e2e-d92ac7f6e831-kube-api-access-mw6sj\") pod \"auto-csr-approver-29555080-lt2dh\" (UID: \"7f467235-08a5-4fa4-8e2e-d92ac7f6e831\") " pod="openshift-infra/auto-csr-approver-29555080-lt2dh" Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.476831 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw6sj\" (UniqueName: \"kubernetes.io/projected/7f467235-08a5-4fa4-8e2e-d92ac7f6e831-kube-api-access-mw6sj\") pod \"auto-csr-approver-29555080-lt2dh\" (UID: \"7f467235-08a5-4fa4-8e2e-d92ac7f6e831\") " 
pod="openshift-infra/auto-csr-approver-29555080-lt2dh" Mar 12 08:40:00 crc kubenswrapper[4809]: I0312 08:40:00.487580 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555080-lt2dh" Mar 12 08:40:01 crc kubenswrapper[4809]: I0312 08:40:01.012152 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555080-lt2dh"] Mar 12 08:40:02 crc kubenswrapper[4809]: I0312 08:40:02.004958 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555080-lt2dh" event={"ID":"7f467235-08a5-4fa4-8e2e-d92ac7f6e831","Type":"ContainerStarted","Data":"9b1c45a15b7fb6a9a8b78f4aef5836e9d372bcfd49decf53e67a6710baee53aa"} Mar 12 08:40:03 crc kubenswrapper[4809]: I0312 08:40:03.023038 4809 generic.go:334] "Generic (PLEG): container finished" podID="7f467235-08a5-4fa4-8e2e-d92ac7f6e831" containerID="08332b8c409d593e6beb57266c62b8a9822f886e549db8f35b3d1dd436718a4f" exitCode=0 Mar 12 08:40:03 crc kubenswrapper[4809]: I0312 08:40:03.023400 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555080-lt2dh" event={"ID":"7f467235-08a5-4fa4-8e2e-d92ac7f6e831","Type":"ContainerDied","Data":"08332b8c409d593e6beb57266c62b8a9822f886e549db8f35b3d1dd436718a4f"} Mar 12 08:40:04 crc kubenswrapper[4809]: I0312 08:40:04.442692 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555080-lt2dh" Mar 12 08:40:04 crc kubenswrapper[4809]: I0312 08:40:04.558304 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw6sj\" (UniqueName: \"kubernetes.io/projected/7f467235-08a5-4fa4-8e2e-d92ac7f6e831-kube-api-access-mw6sj\") pod \"7f467235-08a5-4fa4-8e2e-d92ac7f6e831\" (UID: \"7f467235-08a5-4fa4-8e2e-d92ac7f6e831\") " Mar 12 08:40:04 crc kubenswrapper[4809]: I0312 08:40:04.566717 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f467235-08a5-4fa4-8e2e-d92ac7f6e831-kube-api-access-mw6sj" (OuterVolumeSpecName: "kube-api-access-mw6sj") pod "7f467235-08a5-4fa4-8e2e-d92ac7f6e831" (UID: "7f467235-08a5-4fa4-8e2e-d92ac7f6e831"). InnerVolumeSpecName "kube-api-access-mw6sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:40:04 crc kubenswrapper[4809]: I0312 08:40:04.661224 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw6sj\" (UniqueName: \"kubernetes.io/projected/7f467235-08a5-4fa4-8e2e-d92ac7f6e831-kube-api-access-mw6sj\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:05 crc kubenswrapper[4809]: I0312 08:40:05.048470 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555080-lt2dh" event={"ID":"7f467235-08a5-4fa4-8e2e-d92ac7f6e831","Type":"ContainerDied","Data":"9b1c45a15b7fb6a9a8b78f4aef5836e9d372bcfd49decf53e67a6710baee53aa"} Mar 12 08:40:05 crc kubenswrapper[4809]: I0312 08:40:05.048744 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1c45a15b7fb6a9a8b78f4aef5836e9d372bcfd49decf53e67a6710baee53aa" Mar 12 08:40:05 crc kubenswrapper[4809]: I0312 08:40:05.048533 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555080-lt2dh" Mar 12 08:40:05 crc kubenswrapper[4809]: I0312 08:40:05.527728 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555074-7rpg6"] Mar 12 08:40:05 crc kubenswrapper[4809]: I0312 08:40:05.536924 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555074-7rpg6"] Mar 12 08:40:07 crc kubenswrapper[4809]: I0312 08:40:07.096334 4809 generic.go:334] "Generic (PLEG): container finished" podID="918530d9-5478-4136-8ba8-a36197850b6e" containerID="bfd4634651e6b4baaac26fa9875c586afd6e81f705d9f38b22686899635da7de" exitCode=0 Mar 12 08:40:07 crc kubenswrapper[4809]: I0312 08:40:07.096412 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" event={"ID":"918530d9-5478-4136-8ba8-a36197850b6e","Type":"ContainerDied","Data":"bfd4634651e6b4baaac26fa9875c586afd6e81f705d9f38b22686899635da7de"} Mar 12 08:40:07 crc kubenswrapper[4809]: I0312 08:40:07.130707 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0648e299-c5fa-40d2-8997-bb9189dea019" path="/var/lib/kubelet/pods/0648e299-c5fa-40d2-8997-bb9189dea019/volumes" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.572828 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.702434 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ssh-key-openstack-edpm-ipam\") pod \"918530d9-5478-4136-8ba8-a36197850b6e\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.702622 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-inventory\") pod \"918530d9-5478-4136-8ba8-a36197850b6e\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.702816 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/918530d9-5478-4136-8ba8-a36197850b6e-ovncontroller-config-0\") pod \"918530d9-5478-4136-8ba8-a36197850b6e\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.702869 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5mr4\" (UniqueName: \"kubernetes.io/projected/918530d9-5478-4136-8ba8-a36197850b6e-kube-api-access-f5mr4\") pod \"918530d9-5478-4136-8ba8-a36197850b6e\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.702950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ovn-combined-ca-bundle\") pod \"918530d9-5478-4136-8ba8-a36197850b6e\" (UID: \"918530d9-5478-4136-8ba8-a36197850b6e\") " Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.707461 4809 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918530d9-5478-4136-8ba8-a36197850b6e-kube-api-access-f5mr4" (OuterVolumeSpecName: "kube-api-access-f5mr4") pod "918530d9-5478-4136-8ba8-a36197850b6e" (UID: "918530d9-5478-4136-8ba8-a36197850b6e"). InnerVolumeSpecName "kube-api-access-f5mr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.714512 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "918530d9-5478-4136-8ba8-a36197850b6e" (UID: "918530d9-5478-4136-8ba8-a36197850b6e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.734623 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-inventory" (OuterVolumeSpecName: "inventory") pod "918530d9-5478-4136-8ba8-a36197850b6e" (UID: "918530d9-5478-4136-8ba8-a36197850b6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.734842 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "918530d9-5478-4136-8ba8-a36197850b6e" (UID: "918530d9-5478-4136-8ba8-a36197850b6e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.738835 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/918530d9-5478-4136-8ba8-a36197850b6e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "918530d9-5478-4136-8ba8-a36197850b6e" (UID: "918530d9-5478-4136-8ba8-a36197850b6e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.805861 4809 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/918530d9-5478-4136-8ba8-a36197850b6e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.805898 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5mr4\" (UniqueName: \"kubernetes.io/projected/918530d9-5478-4136-8ba8-a36197850b6e-kube-api-access-f5mr4\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.805909 4809 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.805917 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:08 crc kubenswrapper[4809]: I0312 08:40:08.805927 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/918530d9-5478-4136-8ba8-a36197850b6e-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.122601 4809 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" event={"ID":"918530d9-5478-4136-8ba8-a36197850b6e","Type":"ContainerDied","Data":"3d227ef6beea29f1eaa1dcb7afeb44023898676103e03b68699eaf42418df87b"} Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.122639 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-45gch" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.122662 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d227ef6beea29f1eaa1dcb7afeb44023898676103e03b68699eaf42418df87b" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.224464 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n"] Mar 12 08:40:09 crc kubenswrapper[4809]: E0312 08:40:09.225804 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f467235-08a5-4fa4-8e2e-d92ac7f6e831" containerName="oc" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.225912 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f467235-08a5-4fa4-8e2e-d92ac7f6e831" containerName="oc" Mar 12 08:40:09 crc kubenswrapper[4809]: E0312 08:40:09.226016 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918530d9-5478-4136-8ba8-a36197850b6e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.226082 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="918530d9-5478-4136-8ba8-a36197850b6e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.226484 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f467235-08a5-4fa4-8e2e-d92ac7f6e831" containerName="oc" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.226567 4809 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="918530d9-5478-4136-8ba8-a36197850b6e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.227629 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.232661 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.232685 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.232934 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.233079 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.233233 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.233311 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.251836 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n"] Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.420847 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zwvh\" (UniqueName: \"kubernetes.io/projected/3ea89d78-22ee-4c08-9c21-f093139ca0ac-kube-api-access-4zwvh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.420907 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.421185 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.421246 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.421353 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.421784 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.523954 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.524027 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zwvh\" (UniqueName: \"kubernetes.io/projected/3ea89d78-22ee-4c08-9c21-f093139ca0ac-kube-api-access-4zwvh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.524081 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc 
kubenswrapper[4809]: I0312 08:40:09.524190 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.524227 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.524277 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.529174 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.529304 4809 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.529610 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.529865 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.530564 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.544678 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zwvh\" (UniqueName: 
\"kubernetes.io/projected/3ea89d78-22ee-4c08-9c21-f093139ca0ac-kube-api-access-4zwvh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:09 crc kubenswrapper[4809]: I0312 08:40:09.593850 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:10 crc kubenswrapper[4809]: I0312 08:40:10.258685 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n"] Mar 12 08:40:11 crc kubenswrapper[4809]: I0312 08:40:11.106193 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:40:11 crc kubenswrapper[4809]: E0312 08:40:11.107051 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:40:11 crc kubenswrapper[4809]: I0312 08:40:11.147686 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" event={"ID":"3ea89d78-22ee-4c08-9c21-f093139ca0ac","Type":"ContainerStarted","Data":"b8b59642326185c42afc6291a242f084cc913c88e9143eb45acd1fcf4f413b63"} Mar 12 08:40:11 crc kubenswrapper[4809]: I0312 08:40:11.147742 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" 
event={"ID":"3ea89d78-22ee-4c08-9c21-f093139ca0ac","Type":"ContainerStarted","Data":"5be3059055af805c9ac742374e014e3b0621322e11affe173e8713aed4ed1ac5"} Mar 12 08:40:11 crc kubenswrapper[4809]: I0312 08:40:11.185892 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" podStartSLOduration=1.716190622 podStartE2EDuration="2.185873647s" podCreationTimestamp="2026-03-12 08:40:09 +0000 UTC" firstStartedPulling="2026-03-12 08:40:10.260936709 +0000 UTC m=+2483.842972442" lastFinishedPulling="2026-03-12 08:40:10.730619734 +0000 UTC m=+2484.312655467" observedRunningTime="2026-03-12 08:40:11.170978473 +0000 UTC m=+2484.753014206" watchObservedRunningTime="2026-03-12 08:40:11.185873647 +0000 UTC m=+2484.767909380" Mar 12 08:40:26 crc kubenswrapper[4809]: I0312 08:40:26.108048 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:40:26 crc kubenswrapper[4809]: E0312 08:40:26.110164 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:40:31 crc kubenswrapper[4809]: I0312 08:40:31.868610 4809 scope.go:117] "RemoveContainer" containerID="c54dc4feb9d455802d45ead893fd38dd6c38537551cfdc102ccb423bfe6a80cc" Mar 12 08:40:40 crc kubenswrapper[4809]: I0312 08:40:40.107674 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:40:40 crc kubenswrapper[4809]: E0312 08:40:40.108536 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:40:53 crc kubenswrapper[4809]: I0312 08:40:53.107360 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0" Mar 12 08:40:53 crc kubenswrapper[4809]: I0312 08:40:53.721613 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"347f0d93199140295c3a4a9ffbf332939fc6998442e7b2f348c94f44201c547b"} Mar 12 08:40:55 crc kubenswrapper[4809]: I0312 08:40:55.745597 4809 generic.go:334] "Generic (PLEG): container finished" podID="3ea89d78-22ee-4c08-9c21-f093139ca0ac" containerID="b8b59642326185c42afc6291a242f084cc913c88e9143eb45acd1fcf4f413b63" exitCode=0 Mar 12 08:40:55 crc kubenswrapper[4809]: I0312 08:40:55.745747 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" event={"ID":"3ea89d78-22ee-4c08-9c21-f093139ca0ac","Type":"ContainerDied","Data":"b8b59642326185c42afc6291a242f084cc913c88e9143eb45acd1fcf4f413b63"} Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.270202 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.402136 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-ssh-key-openstack-edpm-ipam\") pod \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.402216 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-metadata-combined-ca-bundle\") pod \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.402424 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-inventory\") pod \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.402459 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.402668 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zwvh\" (UniqueName: \"kubernetes.io/projected/3ea89d78-22ee-4c08-9c21-f093139ca0ac-kube-api-access-4zwvh\") pod \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " Mar 12 
08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.402741 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-nova-metadata-neutron-config-0\") pod \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\" (UID: \"3ea89d78-22ee-4c08-9c21-f093139ca0ac\") " Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.407993 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea89d78-22ee-4c08-9c21-f093139ca0ac-kube-api-access-4zwvh" (OuterVolumeSpecName: "kube-api-access-4zwvh") pod "3ea89d78-22ee-4c08-9c21-f093139ca0ac" (UID: "3ea89d78-22ee-4c08-9c21-f093139ca0ac"). InnerVolumeSpecName "kube-api-access-4zwvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.408624 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3ea89d78-22ee-4c08-9c21-f093139ca0ac" (UID: "3ea89d78-22ee-4c08-9c21-f093139ca0ac"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.436322 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ea89d78-22ee-4c08-9c21-f093139ca0ac" (UID: "3ea89d78-22ee-4c08-9c21-f093139ca0ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.444072 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-inventory" (OuterVolumeSpecName: "inventory") pod "3ea89d78-22ee-4c08-9c21-f093139ca0ac" (UID: "3ea89d78-22ee-4c08-9c21-f093139ca0ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.445770 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3ea89d78-22ee-4c08-9c21-f093139ca0ac" (UID: "3ea89d78-22ee-4c08-9c21-f093139ca0ac"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.456939 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3ea89d78-22ee-4c08-9c21-f093139ca0ac" (UID: "3ea89d78-22ee-4c08-9c21-f093139ca0ac"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.505739 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.505780 4809 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.505795 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zwvh\" (UniqueName: \"kubernetes.io/projected/3ea89d78-22ee-4c08-9c21-f093139ca0ac-kube-api-access-4zwvh\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.505809 4809 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.505821 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.505832 4809 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea89d78-22ee-4c08-9c21-f093139ca0ac-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.776030 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" event={"ID":"3ea89d78-22ee-4c08-9c21-f093139ca0ac","Type":"ContainerDied","Data":"5be3059055af805c9ac742374e014e3b0621322e11affe173e8713aed4ed1ac5"} Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.776367 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5be3059055af805c9ac742374e014e3b0621322e11affe173e8713aed4ed1ac5" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.776097 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.950837 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc"] Mar 12 08:40:57 crc kubenswrapper[4809]: E0312 08:40:57.951367 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea89d78-22ee-4c08-9c21-f093139ca0ac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.951385 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea89d78-22ee-4c08-9c21-f093139ca0ac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.951613 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea89d78-22ee-4c08-9c21-f093139ca0ac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.952442 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.954557 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.955170 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.955411 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.955613 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.957144 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:40:57 crc kubenswrapper[4809]: I0312 08:40:57.967342 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc"] Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.024147 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.024398 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.024618 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbmz\" (UniqueName: \"kubernetes.io/projected/284de205-702a-4c6f-9623-d11a516113ca-kube-api-access-wlbmz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.024807 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.025048 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.127596 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.127700 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.127800 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.127834 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbmz\" (UniqueName: \"kubernetes.io/projected/284de205-702a-4c6f-9623-d11a516113ca-kube-api-access-wlbmz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.127913 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.133576 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: 
\"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.133617 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.134140 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.134424 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.156592 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbmz\" (UniqueName: \"kubernetes.io/projected/284de205-702a-4c6f-9623-d11a516113ca-kube-api-access-wlbmz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.269073 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" Mar 12 08:40:58 crc kubenswrapper[4809]: I0312 08:40:58.876973 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc"] Mar 12 08:40:58 crc kubenswrapper[4809]: W0312 08:40:58.878712 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod284de205_702a_4c6f_9623_d11a516113ca.slice/crio-1205436b3e4e54ef2eecee05d850b64c48e062df56391d1de6b7110bf2b44f29 WatchSource:0}: Error finding container 1205436b3e4e54ef2eecee05d850b64c48e062df56391d1de6b7110bf2b44f29: Status 404 returned error can't find the container with id 1205436b3e4e54ef2eecee05d850b64c48e062df56391d1de6b7110bf2b44f29 Mar 12 08:40:59 crc kubenswrapper[4809]: I0312 08:40:59.804232 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" event={"ID":"284de205-702a-4c6f-9623-d11a516113ca","Type":"ContainerStarted","Data":"6a5ce4d8d19a38fd1b02d80c930be246efd0eb38d3bc75bdd136aa863e9720f2"} Mar 12 08:40:59 crc kubenswrapper[4809]: I0312 08:40:59.804884 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" event={"ID":"284de205-702a-4c6f-9623-d11a516113ca","Type":"ContainerStarted","Data":"1205436b3e4e54ef2eecee05d850b64c48e062df56391d1de6b7110bf2b44f29"} Mar 12 08:40:59 crc kubenswrapper[4809]: I0312 08:40:59.823688 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" podStartSLOduration=2.339753492 podStartE2EDuration="2.823667215s" podCreationTimestamp="2026-03-12 08:40:57 +0000 UTC" firstStartedPulling="2026-03-12 08:40:58.881322173 +0000 UTC m=+2532.463357906" lastFinishedPulling="2026-03-12 08:40:59.365235876 +0000 UTC m=+2532.947271629" 
observedRunningTime="2026-03-12 08:40:59.819000239 +0000 UTC m=+2533.401035972" watchObservedRunningTime="2026-03-12 08:40:59.823667215 +0000 UTC m=+2533.405702948" Mar 12 08:41:32 crc kubenswrapper[4809]: I0312 08:41:32.026451 4809 scope.go:117] "RemoveContainer" containerID="72386443de708a82483e4c7b796fa86cd9fd9efca76c50d9d6945d8f0affe044" Mar 12 08:41:32 crc kubenswrapper[4809]: I0312 08:41:32.049792 4809 scope.go:117] "RemoveContainer" containerID="a83ea793fa5561a8d54951072a4eb84797feec33a1e817417dd0ccb467bdaf99" Mar 12 08:41:32 crc kubenswrapper[4809]: I0312 08:41:32.078769 4809 scope.go:117] "RemoveContainer" containerID="408f7aca14fd09e1456d0545c97ead2a90a6185a59e596822c5fc8a10a3997da" Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.147388 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555082-k4zh7"] Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.149880 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.151582 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.152354 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.153655 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.189601 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpwvf\" (UniqueName: \"kubernetes.io/projected/3d6d6056-9b82-41d2-8042-ab2e5aa1376f-kube-api-access-tpwvf\") pod \"auto-csr-approver-29555082-k4zh7\" (UID: \"3d6d6056-9b82-41d2-8042-ab2e5aa1376f\") " 
pod="openshift-infra/auto-csr-approver-29555082-k4zh7" Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.199884 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555082-k4zh7"] Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.295030 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpwvf\" (UniqueName: \"kubernetes.io/projected/3d6d6056-9b82-41d2-8042-ab2e5aa1376f-kube-api-access-tpwvf\") pod \"auto-csr-approver-29555082-k4zh7\" (UID: \"3d6d6056-9b82-41d2-8042-ab2e5aa1376f\") " pod="openshift-infra/auto-csr-approver-29555082-k4zh7" Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.322000 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpwvf\" (UniqueName: \"kubernetes.io/projected/3d6d6056-9b82-41d2-8042-ab2e5aa1376f-kube-api-access-tpwvf\") pod \"auto-csr-approver-29555082-k4zh7\" (UID: \"3d6d6056-9b82-41d2-8042-ab2e5aa1376f\") " pod="openshift-infra/auto-csr-approver-29555082-k4zh7" Mar 12 08:42:00 crc kubenswrapper[4809]: I0312 08:42:00.515059 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" Mar 12 08:42:01 crc kubenswrapper[4809]: I0312 08:42:01.067693 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555082-k4zh7"] Mar 12 08:42:01 crc kubenswrapper[4809]: I0312 08:42:01.511433 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" event={"ID":"3d6d6056-9b82-41d2-8042-ab2e5aa1376f","Type":"ContainerStarted","Data":"2271dce0783836346eb2abc7f5a6d44d6d2ee0bdedb5de5702eb674fd5f856b5"} Mar 12 08:42:02 crc kubenswrapper[4809]: I0312 08:42:02.526768 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" event={"ID":"3d6d6056-9b82-41d2-8042-ab2e5aa1376f","Type":"ContainerStarted","Data":"ac321b7f95d4b5fe1ace5f28aa530d64e0a7c789d5698607b6eaac8883468dd4"} Mar 12 08:42:02 crc kubenswrapper[4809]: I0312 08:42:02.551261 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" podStartSLOduration=1.6097449670000001 podStartE2EDuration="2.551243317s" podCreationTimestamp="2026-03-12 08:42:00 +0000 UTC" firstStartedPulling="2026-03-12 08:42:01.071405932 +0000 UTC m=+2594.653441665" lastFinishedPulling="2026-03-12 08:42:02.012904282 +0000 UTC m=+2595.594940015" observedRunningTime="2026-03-12 08:42:02.544487053 +0000 UTC m=+2596.126522786" watchObservedRunningTime="2026-03-12 08:42:02.551243317 +0000 UTC m=+2596.133279050" Mar 12 08:42:03 crc kubenswrapper[4809]: I0312 08:42:03.539767 4809 generic.go:334] "Generic (PLEG): container finished" podID="3d6d6056-9b82-41d2-8042-ab2e5aa1376f" containerID="ac321b7f95d4b5fe1ace5f28aa530d64e0a7c789d5698607b6eaac8883468dd4" exitCode=0 Mar 12 08:42:03 crc kubenswrapper[4809]: I0312 08:42:03.539993 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" 
event={"ID":"3d6d6056-9b82-41d2-8042-ab2e5aa1376f","Type":"ContainerDied","Data":"ac321b7f95d4b5fe1ace5f28aa530d64e0a7c789d5698607b6eaac8883468dd4"} Mar 12 08:42:04 crc kubenswrapper[4809]: I0312 08:42:04.971803 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" Mar 12 08:42:05 crc kubenswrapper[4809]: I0312 08:42:05.136696 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpwvf\" (UniqueName: \"kubernetes.io/projected/3d6d6056-9b82-41d2-8042-ab2e5aa1376f-kube-api-access-tpwvf\") pod \"3d6d6056-9b82-41d2-8042-ab2e5aa1376f\" (UID: \"3d6d6056-9b82-41d2-8042-ab2e5aa1376f\") " Mar 12 08:42:05 crc kubenswrapper[4809]: I0312 08:42:05.143853 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6d6056-9b82-41d2-8042-ab2e5aa1376f-kube-api-access-tpwvf" (OuterVolumeSpecName: "kube-api-access-tpwvf") pod "3d6d6056-9b82-41d2-8042-ab2e5aa1376f" (UID: "3d6d6056-9b82-41d2-8042-ab2e5aa1376f"). InnerVolumeSpecName "kube-api-access-tpwvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:42:05 crc kubenswrapper[4809]: I0312 08:42:05.239917 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpwvf\" (UniqueName: \"kubernetes.io/projected/3d6d6056-9b82-41d2-8042-ab2e5aa1376f-kube-api-access-tpwvf\") on node \"crc\" DevicePath \"\"" Mar 12 08:42:05 crc kubenswrapper[4809]: I0312 08:42:05.562099 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" event={"ID":"3d6d6056-9b82-41d2-8042-ab2e5aa1376f","Type":"ContainerDied","Data":"2271dce0783836346eb2abc7f5a6d44d6d2ee0bdedb5de5702eb674fd5f856b5"} Mar 12 08:42:05 crc kubenswrapper[4809]: I0312 08:42:05.562154 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2271dce0783836346eb2abc7f5a6d44d6d2ee0bdedb5de5702eb674fd5f856b5" Mar 12 08:42:05 crc kubenswrapper[4809]: I0312 08:42:05.562211 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555082-k4zh7" Mar 12 08:42:05 crc kubenswrapper[4809]: I0312 08:42:05.624264 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555076-4wwrr"] Mar 12 08:42:05 crc kubenswrapper[4809]: I0312 08:42:05.635437 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555076-4wwrr"] Mar 12 08:42:07 crc kubenswrapper[4809]: I0312 08:42:07.119272 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a12be75-8f6d-4114-be11-98ecd6f751f8" path="/var/lib/kubelet/pods/0a12be75-8f6d-4114-be11-98ecd6f751f8/volumes" Mar 12 08:42:32 crc kubenswrapper[4809]: I0312 08:42:32.186782 4809 scope.go:117] "RemoveContainer" containerID="1753d2a0768e80cda3adcbd57d5e42c9e69c26a21f8af4f7d60b282783c38b9a" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.766629 4809 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-q7hlx"] Mar 12 08:43:04 crc kubenswrapper[4809]: E0312 08:43:04.769317 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6d6056-9b82-41d2-8042-ab2e5aa1376f" containerName="oc" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.769344 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6d6056-9b82-41d2-8042-ab2e5aa1376f" containerName="oc" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.769832 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6d6056-9b82-41d2-8042-ab2e5aa1376f" containerName="oc" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.774399 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.784702 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7hlx"] Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.852685 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-utilities\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.852738 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgcm\" (UniqueName: \"kubernetes.io/projected/3e5c36fc-835f-40b8-8b0c-380d00797bb0-kube-api-access-dwgcm\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.853266 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-catalog-content\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.955107 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-utilities\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.955259 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgcm\" (UniqueName: \"kubernetes.io/projected/3e5c36fc-835f-40b8-8b0c-380d00797bb0-kube-api-access-dwgcm\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.955442 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-catalog-content\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.955552 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-utilities\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.955863 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-catalog-content\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:04 crc kubenswrapper[4809]: I0312 08:43:04.986353 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgcm\" (UniqueName: \"kubernetes.io/projected/3e5c36fc-835f-40b8-8b0c-380d00797bb0-kube-api-access-dwgcm\") pod \"community-operators-q7hlx\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:05 crc kubenswrapper[4809]: I0312 08:43:05.100789 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:05 crc kubenswrapper[4809]: I0312 08:43:05.745227 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7hlx"] Mar 12 08:43:06 crc kubenswrapper[4809]: I0312 08:43:06.406850 4809 generic.go:334] "Generic (PLEG): container finished" podID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerID="ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1" exitCode=0 Mar 12 08:43:06 crc kubenswrapper[4809]: I0312 08:43:06.406962 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hlx" event={"ID":"3e5c36fc-835f-40b8-8b0c-380d00797bb0","Type":"ContainerDied","Data":"ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1"} Mar 12 08:43:06 crc kubenswrapper[4809]: I0312 08:43:06.407217 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hlx" event={"ID":"3e5c36fc-835f-40b8-8b0c-380d00797bb0","Type":"ContainerStarted","Data":"bbc471db4f01de2514fe1a1bfd4c7fae422c9d862e0498267d88e70511fec1b5"} Mar 12 08:43:12 crc kubenswrapper[4809]: I0312 08:43:12.490515 4809 generic.go:334] "Generic (PLEG): container 
finished" podID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerID="c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a" exitCode=0 Mar 12 08:43:12 crc kubenswrapper[4809]: I0312 08:43:12.490696 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hlx" event={"ID":"3e5c36fc-835f-40b8-8b0c-380d00797bb0","Type":"ContainerDied","Data":"c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a"} Mar 12 08:43:13 crc kubenswrapper[4809]: I0312 08:43:13.504770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hlx" event={"ID":"3e5c36fc-835f-40b8-8b0c-380d00797bb0","Type":"ContainerStarted","Data":"b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27"} Mar 12 08:43:13 crc kubenswrapper[4809]: I0312 08:43:13.539884 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7hlx" podStartSLOduration=3.015888553 podStartE2EDuration="9.539853201s" podCreationTimestamp="2026-03-12 08:43:04 +0000 UTC" firstStartedPulling="2026-03-12 08:43:06.409050072 +0000 UTC m=+2659.991085805" lastFinishedPulling="2026-03-12 08:43:12.93301472 +0000 UTC m=+2666.515050453" observedRunningTime="2026-03-12 08:43:13.527649168 +0000 UTC m=+2667.109684931" watchObservedRunningTime="2026-03-12 08:43:13.539853201 +0000 UTC m=+2667.121888954" Mar 12 08:43:15 crc kubenswrapper[4809]: I0312 08:43:15.048483 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:43:15 crc kubenswrapper[4809]: I0312 08:43:15.048842 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" 
podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:43:15 crc kubenswrapper[4809]: I0312 08:43:15.101288 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:15 crc kubenswrapper[4809]: I0312 08:43:15.101334 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:16 crc kubenswrapper[4809]: I0312 08:43:16.151310 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-q7hlx" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" probeResult="failure" output=< Mar 12 08:43:16 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:43:16 crc kubenswrapper[4809]: > Mar 12 08:43:25 crc kubenswrapper[4809]: I0312 08:43:25.162075 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:25 crc kubenswrapper[4809]: I0312 08:43:25.229882 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7hlx" Mar 12 08:43:25 crc kubenswrapper[4809]: I0312 08:43:25.317075 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7hlx"] Mar 12 08:43:25 crc kubenswrapper[4809]: I0312 08:43:25.405380 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nscs5"] Mar 12 08:43:25 crc kubenswrapper[4809]: I0312 08:43:25.405628 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nscs5" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" 
containerName="registry-server" containerID="cri-o://17d1302c6f20bbe09ea21efc6596ec988f06c1b114586cef8da05c3180ab136b" gracePeriod=2 Mar 12 08:43:25 crc kubenswrapper[4809]: I0312 08:43:25.683546 4809 generic.go:334] "Generic (PLEG): container finished" podID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerID="17d1302c6f20bbe09ea21efc6596ec988f06c1b114586cef8da05c3180ab136b" exitCode=0 Mar 12 08:43:25 crc kubenswrapper[4809]: I0312 08:43:25.683626 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nscs5" event={"ID":"8d3cb5a3-ab13-4827-a66f-117150abe43b","Type":"ContainerDied","Data":"17d1302c6f20bbe09ea21efc6596ec988f06c1b114586cef8da05c3180ab136b"} Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.203229 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.292686 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt5gk\" (UniqueName: \"kubernetes.io/projected/8d3cb5a3-ab13-4827-a66f-117150abe43b-kube-api-access-wt5gk\") pod \"8d3cb5a3-ab13-4827-a66f-117150abe43b\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.292921 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-catalog-content\") pod \"8d3cb5a3-ab13-4827-a66f-117150abe43b\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.293050 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-utilities\") pod \"8d3cb5a3-ab13-4827-a66f-117150abe43b\" (UID: \"8d3cb5a3-ab13-4827-a66f-117150abe43b\") " Mar 12 08:43:26 crc 
kubenswrapper[4809]: I0312 08:43:26.297783 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-utilities" (OuterVolumeSpecName: "utilities") pod "8d3cb5a3-ab13-4827-a66f-117150abe43b" (UID: "8d3cb5a3-ab13-4827-a66f-117150abe43b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.300909 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d3cb5a3-ab13-4827-a66f-117150abe43b-kube-api-access-wt5gk" (OuterVolumeSpecName: "kube-api-access-wt5gk") pod "8d3cb5a3-ab13-4827-a66f-117150abe43b" (UID: "8d3cb5a3-ab13-4827-a66f-117150abe43b"). InnerVolumeSpecName "kube-api-access-wt5gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.382184 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d3cb5a3-ab13-4827-a66f-117150abe43b" (UID: "8d3cb5a3-ab13-4827-a66f-117150abe43b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.398101 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt5gk\" (UniqueName: \"kubernetes.io/projected/8d3cb5a3-ab13-4827-a66f-117150abe43b-kube-api-access-wt5gk\") on node \"crc\" DevicePath \"\"" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.398165 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.398176 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d3cb5a3-ab13-4827-a66f-117150abe43b-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.696797 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nscs5" event={"ID":"8d3cb5a3-ab13-4827-a66f-117150abe43b","Type":"ContainerDied","Data":"5e8cef456a3ccf9a28aa47c24a54b6117ebe89563983c82af93c3a0fc9485f6a"} Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.697197 4809 scope.go:117] "RemoveContainer" containerID="17d1302c6f20bbe09ea21efc6596ec988f06c1b114586cef8da05c3180ab136b" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.696883 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nscs5" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.763959 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nscs5"] Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.777019 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nscs5"] Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.780200 4809 scope.go:117] "RemoveContainer" containerID="f69ee30531caa2dd4c6c6d5bcb4fddd06cb9e0e40b4029435f823c0d136e8521" Mar 12 08:43:26 crc kubenswrapper[4809]: I0312 08:43:26.828851 4809 scope.go:117] "RemoveContainer" containerID="47739538ac431c900b19497312c4cc536e7833e54b717bbab04b1f63bdbfad77" Mar 12 08:43:27 crc kubenswrapper[4809]: I0312 08:43:27.119298 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" path="/var/lib/kubelet/pods/8d3cb5a3-ab13-4827-a66f-117150abe43b/volumes" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.425176 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-klbqp"] Mar 12 08:43:29 crc kubenswrapper[4809]: E0312 08:43:29.426705 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerName="extract-utilities" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.426742 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerName="extract-utilities" Mar 12 08:43:29 crc kubenswrapper[4809]: E0312 08:43:29.426804 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerName="extract-content" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.426818 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" 
containerName="extract-content" Mar 12 08:43:29 crc kubenswrapper[4809]: E0312 08:43:29.426883 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerName="registry-server" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.426903 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerName="registry-server" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.427545 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d3cb5a3-ab13-4827-a66f-117150abe43b" containerName="registry-server" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.431656 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.459507 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klbqp"] Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.573649 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xn4\" (UniqueName: \"kubernetes.io/projected/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-kube-api-access-j7xn4\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.573720 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-catalog-content\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.573947 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-utilities\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.676395 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7xn4\" (UniqueName: \"kubernetes.io/projected/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-kube-api-access-j7xn4\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.676470 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-catalog-content\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.676526 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-utilities\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.677133 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-catalog-content\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.677174 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-utilities\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.703220 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7xn4\" (UniqueName: \"kubernetes.io/projected/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-kube-api-access-j7xn4\") pod \"certified-operators-klbqp\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:29 crc kubenswrapper[4809]: I0312 08:43:29.763317 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:30 crc kubenswrapper[4809]: I0312 08:43:30.349316 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klbqp"] Mar 12 08:43:30 crc kubenswrapper[4809]: W0312 08:43:30.367463 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a8c3e7_94b1_4eec_99e9_f54b19eb5a72.slice/crio-0ed48396cab74fdda116efccb0f22b2daa5c1b8e83eb7a68b64275aaaf3435a2 WatchSource:0}: Error finding container 0ed48396cab74fdda116efccb0f22b2daa5c1b8e83eb7a68b64275aaaf3435a2: Status 404 returned error can't find the container with id 0ed48396cab74fdda116efccb0f22b2daa5c1b8e83eb7a68b64275aaaf3435a2 Mar 12 08:43:30 crc kubenswrapper[4809]: I0312 08:43:30.739322 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerID="963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3" exitCode=0 Mar 12 08:43:30 crc kubenswrapper[4809]: I0312 08:43:30.739419 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klbqp" 
event={"ID":"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72","Type":"ContainerDied","Data":"963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3"} Mar 12 08:43:30 crc kubenswrapper[4809]: I0312 08:43:30.739687 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klbqp" event={"ID":"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72","Type":"ContainerStarted","Data":"0ed48396cab74fdda116efccb0f22b2daa5c1b8e83eb7a68b64275aaaf3435a2"} Mar 12 08:43:31 crc kubenswrapper[4809]: I0312 08:43:31.752356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klbqp" event={"ID":"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72","Type":"ContainerStarted","Data":"3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336"} Mar 12 08:43:33 crc kubenswrapper[4809]: I0312 08:43:33.775208 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerID="3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336" exitCode=0 Mar 12 08:43:33 crc kubenswrapper[4809]: I0312 08:43:33.775726 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klbqp" event={"ID":"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72","Type":"ContainerDied","Data":"3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336"} Mar 12 08:43:34 crc kubenswrapper[4809]: I0312 08:43:34.790407 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klbqp" event={"ID":"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72","Type":"ContainerStarted","Data":"0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3"} Mar 12 08:43:39 crc kubenswrapper[4809]: I0312 08:43:39.763797 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:39 crc kubenswrapper[4809]: I0312 08:43:39.764570 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:39 crc kubenswrapper[4809]: I0312 08:43:39.832237 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:39 crc kubenswrapper[4809]: I0312 08:43:39.867141 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-klbqp" podStartSLOduration=7.269065148 podStartE2EDuration="10.867096956s" podCreationTimestamp="2026-03-12 08:43:29 +0000 UTC" firstStartedPulling="2026-03-12 08:43:30.741302367 +0000 UTC m=+2684.323338100" lastFinishedPulling="2026-03-12 08:43:34.339334165 +0000 UTC m=+2687.921369908" observedRunningTime="2026-03-12 08:43:34.80897298 +0000 UTC m=+2688.391008713" watchObservedRunningTime="2026-03-12 08:43:39.867096956 +0000 UTC m=+2693.449132689" Mar 12 08:43:39 crc kubenswrapper[4809]: I0312 08:43:39.940109 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:40 crc kubenswrapper[4809]: I0312 08:43:40.084563 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klbqp"] Mar 12 08:43:41 crc kubenswrapper[4809]: I0312 08:43:41.890619 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-klbqp" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerName="registry-server" containerID="cri-o://0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3" gracePeriod=2 Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.587090 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.652675 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-utilities\") pod \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.652926 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7xn4\" (UniqueName: \"kubernetes.io/projected/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-kube-api-access-j7xn4\") pod \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.652979 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-catalog-content\") pod \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\" (UID: \"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72\") " Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.654072 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-utilities" (OuterVolumeSpecName: "utilities") pod "c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" (UID: "c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.661801 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-kube-api-access-j7xn4" (OuterVolumeSpecName: "kube-api-access-j7xn4") pod "c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" (UID: "c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72"). InnerVolumeSpecName "kube-api-access-j7xn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.738235 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" (UID: "c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.757688 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.757750 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7xn4\" (UniqueName: \"kubernetes.io/projected/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-kube-api-access-j7xn4\") on node \"crc\" DevicePath \"\"" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.757776 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.908505 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerID="0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3" exitCode=0 Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.908571 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klbqp" event={"ID":"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72","Type":"ContainerDied","Data":"0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3"} Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.908610 4809 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-klbqp" event={"ID":"c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72","Type":"ContainerDied","Data":"0ed48396cab74fdda116efccb0f22b2daa5c1b8e83eb7a68b64275aaaf3435a2"} Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.908657 4809 scope.go:117] "RemoveContainer" containerID="0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.908667 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klbqp" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.954435 4809 scope.go:117] "RemoveContainer" containerID="3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.961812 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klbqp"] Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.987561 4809 scope.go:117] "RemoveContainer" containerID="963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3" Mar 12 08:43:42 crc kubenswrapper[4809]: I0312 08:43:42.991148 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-klbqp"] Mar 12 08:43:43 crc kubenswrapper[4809]: I0312 08:43:43.040699 4809 scope.go:117] "RemoveContainer" containerID="0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3" Mar 12 08:43:43 crc kubenswrapper[4809]: E0312 08:43:43.041841 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3\": container with ID starting with 0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3 not found: ID does not exist" containerID="0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3" Mar 12 08:43:43 crc kubenswrapper[4809]: I0312 
08:43:43.041894 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3"} err="failed to get container status \"0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3\": rpc error: code = NotFound desc = could not find container \"0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3\": container with ID starting with 0d48a25e2e390fbee02e2e71f03acc0ab537cd80ce54e479c3ccaa535a6066d3 not found: ID does not exist" Mar 12 08:43:43 crc kubenswrapper[4809]: I0312 08:43:43.041930 4809 scope.go:117] "RemoveContainer" containerID="3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336" Mar 12 08:43:43 crc kubenswrapper[4809]: E0312 08:43:43.042635 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336\": container with ID starting with 3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336 not found: ID does not exist" containerID="3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336" Mar 12 08:43:43 crc kubenswrapper[4809]: I0312 08:43:43.042719 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336"} err="failed to get container status \"3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336\": rpc error: code = NotFound desc = could not find container \"3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336\": container with ID starting with 3f338b48fa8d4c4e21aa6fd9345a11595099807281ae5b166d42b37d97a7b336 not found: ID does not exist" Mar 12 08:43:43 crc kubenswrapper[4809]: I0312 08:43:43.042781 4809 scope.go:117] "RemoveContainer" containerID="963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3" Mar 12 08:43:43 crc 
kubenswrapper[4809]: E0312 08:43:43.043205 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3\": container with ID starting with 963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3 not found: ID does not exist" containerID="963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3" Mar 12 08:43:43 crc kubenswrapper[4809]: I0312 08:43:43.043248 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3"} err="failed to get container status \"963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3\": rpc error: code = NotFound desc = could not find container \"963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3\": container with ID starting with 963a6fe5db8a3223cb3dbd1db52b09a1fceeae8cccd9e9b736fe105bfb4a5ed3 not found: ID does not exist" Mar 12 08:43:43 crc kubenswrapper[4809]: I0312 08:43:43.120891 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" path="/var/lib/kubelet/pods/c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72/volumes" Mar 12 08:43:45 crc kubenswrapper[4809]: I0312 08:43:45.048255 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:43:45 crc kubenswrapper[4809]: I0312 08:43:45.048597 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.148022 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555084-k2gpc"] Mar 12 08:44:00 crc kubenswrapper[4809]: E0312 08:44:00.149059 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerName="extract-utilities" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.149072 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerName="extract-utilities" Mar 12 08:44:00 crc kubenswrapper[4809]: E0312 08:44:00.149083 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerName="registry-server" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.149089 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerName="registry-server" Mar 12 08:44:00 crc kubenswrapper[4809]: E0312 08:44:00.149101 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerName="extract-content" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.149107 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerName="extract-content" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.149377 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a8c3e7-94b1-4eec-99e9-f54b19eb5a72" containerName="registry-server" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.150213 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555084-k2gpc" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.154155 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.154276 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.157683 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.179280 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555084-k2gpc"] Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.180309 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbmgd\" (UniqueName: \"kubernetes.io/projected/4a1a295d-1106-45a2-9718-13f7cef581bd-kube-api-access-mbmgd\") pod \"auto-csr-approver-29555084-k2gpc\" (UID: \"4a1a295d-1106-45a2-9718-13f7cef581bd\") " pod="openshift-infra/auto-csr-approver-29555084-k2gpc" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.282917 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbmgd\" (UniqueName: \"kubernetes.io/projected/4a1a295d-1106-45a2-9718-13f7cef581bd-kube-api-access-mbmgd\") pod \"auto-csr-approver-29555084-k2gpc\" (UID: \"4a1a295d-1106-45a2-9718-13f7cef581bd\") " pod="openshift-infra/auto-csr-approver-29555084-k2gpc" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.313870 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbmgd\" (UniqueName: \"kubernetes.io/projected/4a1a295d-1106-45a2-9718-13f7cef581bd-kube-api-access-mbmgd\") pod \"auto-csr-approver-29555084-k2gpc\" (UID: \"4a1a295d-1106-45a2-9718-13f7cef581bd\") " 
pod="openshift-infra/auto-csr-approver-29555084-k2gpc" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.476614 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555084-k2gpc" Mar 12 08:44:00 crc kubenswrapper[4809]: I0312 08:44:00.980906 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555084-k2gpc"] Mar 12 08:44:01 crc kubenswrapper[4809]: I0312 08:44:01.191134 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555084-k2gpc" event={"ID":"4a1a295d-1106-45a2-9718-13f7cef581bd","Type":"ContainerStarted","Data":"c6b6ec78bee4fd8feca6cbe2cbc6157836944e283f06eeb76c7089c9b32532cf"} Mar 12 08:44:02 crc kubenswrapper[4809]: I0312 08:44:02.235584 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555084-k2gpc" event={"ID":"4a1a295d-1106-45a2-9718-13f7cef581bd","Type":"ContainerStarted","Data":"a74fb42a7b0343554a461edd77982d87f15f5ef443fb40ae821b1060e218eca5"} Mar 12 08:44:02 crc kubenswrapper[4809]: I0312 08:44:02.257837 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555084-k2gpc" podStartSLOduration=1.36338217 podStartE2EDuration="2.257814939s" podCreationTimestamp="2026-03-12 08:44:00 +0000 UTC" firstStartedPulling="2026-03-12 08:44:00.988898755 +0000 UTC m=+2714.570934478" lastFinishedPulling="2026-03-12 08:44:01.883331514 +0000 UTC m=+2715.465367247" observedRunningTime="2026-03-12 08:44:02.252619177 +0000 UTC m=+2715.834654910" watchObservedRunningTime="2026-03-12 08:44:02.257814939 +0000 UTC m=+2715.839850672" Mar 12 08:44:03 crc kubenswrapper[4809]: I0312 08:44:03.256307 4809 generic.go:334] "Generic (PLEG): container finished" podID="4a1a295d-1106-45a2-9718-13f7cef581bd" containerID="a74fb42a7b0343554a461edd77982d87f15f5ef443fb40ae821b1060e218eca5" exitCode=0 Mar 12 08:44:03 crc 
kubenswrapper[4809]: I0312 08:44:03.256366 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555084-k2gpc" event={"ID":"4a1a295d-1106-45a2-9718-13f7cef581bd","Type":"ContainerDied","Data":"a74fb42a7b0343554a461edd77982d87f15f5ef443fb40ae821b1060e218eca5"} Mar 12 08:44:04 crc kubenswrapper[4809]: I0312 08:44:04.715693 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555084-k2gpc" Mar 12 08:44:04 crc kubenswrapper[4809]: I0312 08:44:04.822930 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbmgd\" (UniqueName: \"kubernetes.io/projected/4a1a295d-1106-45a2-9718-13f7cef581bd-kube-api-access-mbmgd\") pod \"4a1a295d-1106-45a2-9718-13f7cef581bd\" (UID: \"4a1a295d-1106-45a2-9718-13f7cef581bd\") " Mar 12 08:44:04 crc kubenswrapper[4809]: I0312 08:44:04.832972 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1a295d-1106-45a2-9718-13f7cef581bd-kube-api-access-mbmgd" (OuterVolumeSpecName: "kube-api-access-mbmgd") pod "4a1a295d-1106-45a2-9718-13f7cef581bd" (UID: "4a1a295d-1106-45a2-9718-13f7cef581bd"). InnerVolumeSpecName "kube-api-access-mbmgd". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:44:04 crc kubenswrapper[4809]: I0312 08:44:04.925733 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbmgd\" (UniqueName: \"kubernetes.io/projected/4a1a295d-1106-45a2-9718-13f7cef581bd-kube-api-access-mbmgd\") on node \"crc\" DevicePath \"\""
Mar 12 08:44:05 crc kubenswrapper[4809]: I0312 08:44:05.284107 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555084-k2gpc" event={"ID":"4a1a295d-1106-45a2-9718-13f7cef581bd","Type":"ContainerDied","Data":"c6b6ec78bee4fd8feca6cbe2cbc6157836944e283f06eeb76c7089c9b32532cf"}
Mar 12 08:44:05 crc kubenswrapper[4809]: I0312 08:44:05.284208 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6b6ec78bee4fd8feca6cbe2cbc6157836944e283f06eeb76c7089c9b32532cf"
Mar 12 08:44:05 crc kubenswrapper[4809]: I0312 08:44:05.284223 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555084-k2gpc"
Mar 12 08:44:05 crc kubenswrapper[4809]: I0312 08:44:05.334121 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555078-5hcvx"]
Mar 12 08:44:05 crc kubenswrapper[4809]: I0312 08:44:05.346150 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555078-5hcvx"]
Mar 12 08:44:07 crc kubenswrapper[4809]: I0312 08:44:07.122333 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e95f881b-5b87-442e-bd68-b9dfaf6c1bf3" path="/var/lib/kubelet/pods/e95f881b-5b87-442e-bd68-b9dfaf6c1bf3/volumes"
Mar 12 08:44:15 crc kubenswrapper[4809]: I0312 08:44:15.048855 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 08:44:15 crc kubenswrapper[4809]: I0312 08:44:15.049882 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 08:44:15 crc kubenswrapper[4809]: I0312 08:44:15.049955 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c"
Mar 12 08:44:15 crc kubenswrapper[4809]: I0312 08:44:15.051254 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"347f0d93199140295c3a4a9ffbf332939fc6998442e7b2f348c94f44201c547b"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 08:44:15 crc kubenswrapper[4809]: I0312 08:44:15.051318 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://347f0d93199140295c3a4a9ffbf332939fc6998442e7b2f348c94f44201c547b" gracePeriod=600
Mar 12 08:44:15 crc kubenswrapper[4809]: I0312 08:44:15.435609 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="347f0d93199140295c3a4a9ffbf332939fc6998442e7b2f348c94f44201c547b" exitCode=0
Mar 12 08:44:15 crc kubenswrapper[4809]: I0312 08:44:15.435666 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"347f0d93199140295c3a4a9ffbf332939fc6998442e7b2f348c94f44201c547b"}
Mar 12 08:44:15 crc kubenswrapper[4809]: I0312 08:44:15.436284 4809 scope.go:117] "RemoveContainer" containerID="bf2cc15f1ee829a164218f56f3866a7078c46a7688edcf75cd90ece45a87d0e0"
Mar 12 08:44:16 crc kubenswrapper[4809]: I0312 08:44:16.457463 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c"}
Mar 12 08:44:32 crc kubenswrapper[4809]: I0312 08:44:32.331887 4809 scope.go:117] "RemoveContainer" containerID="b56025b8da2276bfbff31a5a75c3ba43bb0e2549bd4d40235c4f51cbeb3b414a"
Mar 12 08:44:39 crc kubenswrapper[4809]: I0312 08:44:39.705339 4809 generic.go:334] "Generic (PLEG): container finished" podID="284de205-702a-4c6f-9623-d11a516113ca" containerID="6a5ce4d8d19a38fd1b02d80c930be246efd0eb38d3bc75bdd136aa863e9720f2" exitCode=0
Mar 12 08:44:39 crc kubenswrapper[4809]: I0312 08:44:39.705431 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" event={"ID":"284de205-702a-4c6f-9623-d11a516113ca","Type":"ContainerDied","Data":"6a5ce4d8d19a38fd1b02d80c930be246efd0eb38d3bc75bdd136aa863e9720f2"}
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.243452 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.444819 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlbmz\" (UniqueName: \"kubernetes.io/projected/284de205-702a-4c6f-9623-d11a516113ca-kube-api-access-wlbmz\") pod \"284de205-702a-4c6f-9623-d11a516113ca\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") "
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.445036 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-combined-ca-bundle\") pod \"284de205-702a-4c6f-9623-d11a516113ca\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") "
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.445080 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-ssh-key-openstack-edpm-ipam\") pod \"284de205-702a-4c6f-9623-d11a516113ca\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") "
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.445225 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-inventory\") pod \"284de205-702a-4c6f-9623-d11a516113ca\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") "
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.445289 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-secret-0\") pod \"284de205-702a-4c6f-9623-d11a516113ca\" (UID: \"284de205-702a-4c6f-9623-d11a516113ca\") "
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.466496 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "284de205-702a-4c6f-9623-d11a516113ca" (UID: "284de205-702a-4c6f-9623-d11a516113ca"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.467625 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284de205-702a-4c6f-9623-d11a516113ca-kube-api-access-wlbmz" (OuterVolumeSpecName: "kube-api-access-wlbmz") pod "284de205-702a-4c6f-9623-d11a516113ca" (UID: "284de205-702a-4c6f-9623-d11a516113ca"). InnerVolumeSpecName "kube-api-access-wlbmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.478339 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "284de205-702a-4c6f-9623-d11a516113ca" (UID: "284de205-702a-4c6f-9623-d11a516113ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.478911 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-inventory" (OuterVolumeSpecName: "inventory") pod "284de205-702a-4c6f-9623-d11a516113ca" (UID: "284de205-702a-4c6f-9623-d11a516113ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.484945 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "284de205-702a-4c6f-9623-d11a516113ca" (UID: "284de205-702a-4c6f-9623-d11a516113ca"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.548706 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-inventory\") on node \"crc\" DevicePath \"\""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.548743 4809 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.548757 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlbmz\" (UniqueName: \"kubernetes.io/projected/284de205-702a-4c6f-9623-d11a516113ca-kube-api-access-wlbmz\") on node \"crc\" DevicePath \"\""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.548767 4809 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.548776 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/284de205-702a-4c6f-9623-d11a516113ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.735618 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc" event={"ID":"284de205-702a-4c6f-9623-d11a516113ca","Type":"ContainerDied","Data":"1205436b3e4e54ef2eecee05d850b64c48e062df56391d1de6b7110bf2b44f29"}
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.735669 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1205436b3e4e54ef2eecee05d850b64c48e062df56391d1de6b7110bf2b44f29"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.735711 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.831166 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"]
Mar 12 08:44:41 crc kubenswrapper[4809]: E0312 08:44:41.831777 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284de205-702a-4c6f-9623-d11a516113ca" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.831797 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="284de205-702a-4c6f-9623-d11a516113ca" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Mar 12 08:44:41 crc kubenswrapper[4809]: E0312 08:44:41.831829 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1a295d-1106-45a2-9718-13f7cef581bd" containerName="oc"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.831837 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1a295d-1106-45a2-9718-13f7cef581bd" containerName="oc"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.832054 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1a295d-1106-45a2-9718-13f7cef581bd" containerName="oc"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.832075 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="284de205-702a-4c6f-9623-d11a516113ca" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.832948 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.836039 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.836507 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.836745 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.836988 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.837398 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.840338 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.848060 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.857176 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.857231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.857257 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdcl\" (UniqueName: \"kubernetes.io/projected/0fb437d4-d106-4655-8a3f-05446deb2be1-kube-api-access-rzdcl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.857516 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.857820 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.858180 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.858283 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.858404 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.858480 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.858530 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.858584 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.861648 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"]
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960057 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960134 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960163 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960182 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960203 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960354 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960388 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzdcl\" (UniqueName: \"kubernetes.io/projected/0fb437d4-d106-4655-8a3f-05446deb2be1-kube-api-access-rzdcl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960478 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960548 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.960621 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.961805 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.965025 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.965794 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.966692 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.967541 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.968189 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.969508 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.970246 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.972721 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.980872 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:41 crc kubenswrapper[4809]: I0312 08:44:41.981004 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzdcl\" (UniqueName: \"kubernetes.io/projected/0fb437d4-d106-4655-8a3f-05446deb2be1-kube-api-access-rzdcl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w88ml\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:42 crc kubenswrapper[4809]: I0312 08:44:42.152319 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:44:42 crc kubenswrapper[4809]: I0312 08:44:42.757538 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"]
Mar 12 08:44:42 crc kubenswrapper[4809]: I0312 08:44:42.769476 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 08:44:43 crc kubenswrapper[4809]: I0312 08:44:43.757692 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml" event={"ID":"0fb437d4-d106-4655-8a3f-05446deb2be1","Type":"ContainerStarted","Data":"155b576d8b607f8c899cc74a9ece259c83ea9152ef79abc019be2e77a82b0307"}
Mar 12 08:44:43 crc kubenswrapper[4809]: I0312 08:44:43.758304 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml" event={"ID":"0fb437d4-d106-4655-8a3f-05446deb2be1","Type":"ContainerStarted","Data":"0c5e46b4de9f961cea6512640479eecd273fad92664bf0a107dbf3c0815b5105"}
Mar 12 08:44:43 crc kubenswrapper[4809]: I0312 08:44:43.786241 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml" podStartSLOduration=2.38256684 podStartE2EDuration="2.78622164s" podCreationTimestamp="2026-03-12 08:44:41 +0000 UTC" firstStartedPulling="2026-03-12 08:44:42.769257242 +0000 UTC m=+2756.351292975" lastFinishedPulling="2026-03-12 08:44:43.172912042 +0000 UTC m=+2756.754947775" observedRunningTime="2026-03-12 08:44:43.773641627 +0000 UTC m=+2757.355677360" watchObservedRunningTime="2026-03-12 08:44:43.78622164 +0000 UTC m=+2757.368257373"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.136874 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"]
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.139539 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.144234 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.144712 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.151898 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"]
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.315814 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dfd98d0-2a15-4bcf-b463-8786260177f4-secret-volume\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.316278 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9hk7\" (UniqueName: \"kubernetes.io/projected/7dfd98d0-2a15-4bcf-b463-8786260177f4-kube-api-access-c9hk7\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.316310 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dfd98d0-2a15-4bcf-b463-8786260177f4-config-volume\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.455964 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dfd98d0-2a15-4bcf-b463-8786260177f4-secret-volume\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.456260 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dfd98d0-2a15-4bcf-b463-8786260177f4-config-volume\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.456281 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9hk7\" (UniqueName: \"kubernetes.io/projected/7dfd98d0-2a15-4bcf-b463-8786260177f4-kube-api-access-c9hk7\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.459273 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dfd98d0-2a15-4bcf-b463-8786260177f4-config-volume\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.463585 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dfd98d0-2a15-4bcf-b463-8786260177f4-secret-volume\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.477987 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9hk7\" (UniqueName: \"kubernetes.io/projected/7dfd98d0-2a15-4bcf-b463-8786260177f4-kube-api-access-c9hk7\") pod \"collect-profiles-29555085-9xbwl\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:00 crc kubenswrapper[4809]: I0312 08:45:00.765478 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:01 crc kubenswrapper[4809]: I0312 08:45:01.247968 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"]
Mar 12 08:45:01 crc kubenswrapper[4809]: I0312 08:45:01.958065 4809 generic.go:334] "Generic (PLEG): container finished" podID="7dfd98d0-2a15-4bcf-b463-8786260177f4" containerID="4c10fb0b966782076e1cbbb15fec4398e05df5e2719ba119de613f948e894a9e" exitCode=0
Mar 12 08:45:01 crc kubenswrapper[4809]: I0312 08:45:01.958294 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl" event={"ID":"7dfd98d0-2a15-4bcf-b463-8786260177f4","Type":"ContainerDied","Data":"4c10fb0b966782076e1cbbb15fec4398e05df5e2719ba119de613f948e894a9e"}
Mar 12 08:45:01 crc kubenswrapper[4809]: I0312 08:45:01.958370 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl" event={"ID":"7dfd98d0-2a15-4bcf-b463-8786260177f4","Type":"ContainerStarted","Data":"ac92cd177900c22a9de746323376525458aec03a1a1afb9b3a932a5939fa8f5c"}
Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.496418 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"
Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.542911 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dfd98d0-2a15-4bcf-b463-8786260177f4-config-volume\") pod \"7dfd98d0-2a15-4bcf-b463-8786260177f4\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") "
Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.543304 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9hk7\" (UniqueName: \"kubernetes.io/projected/7dfd98d0-2a15-4bcf-b463-8786260177f4-kube-api-access-c9hk7\") pod \"7dfd98d0-2a15-4bcf-b463-8786260177f4\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") "
Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.543472 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dfd98d0-2a15-4bcf-b463-8786260177f4-secret-volume\") pod \"7dfd98d0-2a15-4bcf-b463-8786260177f4\" (UID: \"7dfd98d0-2a15-4bcf-b463-8786260177f4\") "
Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.544463 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dfd98d0-2a15-4bcf-b463-8786260177f4-config-volume" (OuterVolumeSpecName: "config-volume") pod "7dfd98d0-2a15-4bcf-b463-8786260177f4" (UID: "7dfd98d0-2a15-4bcf-b463-8786260177f4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.551477 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfd98d0-2a15-4bcf-b463-8786260177f4-kube-api-access-c9hk7" (OuterVolumeSpecName: "kube-api-access-c9hk7") pod "7dfd98d0-2a15-4bcf-b463-8786260177f4" (UID: "7dfd98d0-2a15-4bcf-b463-8786260177f4").
InnerVolumeSpecName "kube-api-access-c9hk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.551711 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dfd98d0-2a15-4bcf-b463-8786260177f4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7dfd98d0-2a15-4bcf-b463-8786260177f4" (UID: "7dfd98d0-2a15-4bcf-b463-8786260177f4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.647216 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7dfd98d0-2a15-4bcf-b463-8786260177f4-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.647256 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dfd98d0-2a15-4bcf-b463-8786260177f4-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.647266 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9hk7\" (UniqueName: \"kubernetes.io/projected/7dfd98d0-2a15-4bcf-b463-8786260177f4-kube-api-access-c9hk7\") on node \"crc\" DevicePath \"\"" Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.999051 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl" event={"ID":"7dfd98d0-2a15-4bcf-b463-8786260177f4","Type":"ContainerDied","Data":"ac92cd177900c22a9de746323376525458aec03a1a1afb9b3a932a5939fa8f5c"} Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.999106 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac92cd177900c22a9de746323376525458aec03a1a1afb9b3a932a5939fa8f5c" Mar 12 08:45:03 crc kubenswrapper[4809]: I0312 08:45:03.999144 4809 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl" Mar 12 08:45:04 crc kubenswrapper[4809]: I0312 08:45:04.601311 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h"] Mar 12 08:45:04 crc kubenswrapper[4809]: I0312 08:45:04.616688 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555040-4v65h"] Mar 12 08:45:05 crc kubenswrapper[4809]: I0312 08:45:05.122917 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f8eef2-550d-4e94-9719-3f2abbbb3ecc" path="/var/lib/kubelet/pods/42f8eef2-550d-4e94-9719-3f2abbbb3ecc/volumes" Mar 12 08:45:32 crc kubenswrapper[4809]: I0312 08:45:32.457849 4809 scope.go:117] "RemoveContainer" containerID="e3ae3ffc84184d6fe0be1bcee6b636b19a55d8c06909baf833e1e44c86967155" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.164663 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555086-87qxs"] Mar 12 08:46:00 crc kubenswrapper[4809]: E0312 08:46:00.165674 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfd98d0-2a15-4bcf-b463-8786260177f4" containerName="collect-profiles" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.165688 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfd98d0-2a15-4bcf-b463-8786260177f4" containerName="collect-profiles" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.165952 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfd98d0-2a15-4bcf-b463-8786260177f4" containerName="collect-profiles" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.166710 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555086-87qxs" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.172639 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.172905 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.180903 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.182690 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555086-87qxs"] Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.344346 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945m7\" (UniqueName: \"kubernetes.io/projected/a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0-kube-api-access-945m7\") pod \"auto-csr-approver-29555086-87qxs\" (UID: \"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0\") " pod="openshift-infra/auto-csr-approver-29555086-87qxs" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.447246 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-945m7\" (UniqueName: \"kubernetes.io/projected/a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0-kube-api-access-945m7\") pod \"auto-csr-approver-29555086-87qxs\" (UID: \"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0\") " pod="openshift-infra/auto-csr-approver-29555086-87qxs" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.485360 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-945m7\" (UniqueName: \"kubernetes.io/projected/a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0-kube-api-access-945m7\") pod \"auto-csr-approver-29555086-87qxs\" (UID: \"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0\") " 
pod="openshift-infra/auto-csr-approver-29555086-87qxs" Mar 12 08:46:00 crc kubenswrapper[4809]: I0312 08:46:00.512632 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555086-87qxs" Mar 12 08:46:01 crc kubenswrapper[4809]: W0312 08:46:01.147746 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9a75ab1_ea2f_4ef8_877f_6a03c946dcb0.slice/crio-7ff4a2e3d82b42f2836b849fcfd84a8c7100affc3e1d91dcd0a3303c9e6d4e4f WatchSource:0}: Error finding container 7ff4a2e3d82b42f2836b849fcfd84a8c7100affc3e1d91dcd0a3303c9e6d4e4f: Status 404 returned error can't find the container with id 7ff4a2e3d82b42f2836b849fcfd84a8c7100affc3e1d91dcd0a3303c9e6d4e4f Mar 12 08:46:01 crc kubenswrapper[4809]: I0312 08:46:01.149592 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555086-87qxs"] Mar 12 08:46:01 crc kubenswrapper[4809]: I0312 08:46:01.865500 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555086-87qxs" event={"ID":"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0","Type":"ContainerStarted","Data":"7ff4a2e3d82b42f2836b849fcfd84a8c7100affc3e1d91dcd0a3303c9e6d4e4f"} Mar 12 08:46:02 crc kubenswrapper[4809]: I0312 08:46:02.878625 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555086-87qxs" event={"ID":"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0","Type":"ContainerStarted","Data":"6089abd2f83f30787fcb76e540eea1876da9376ef182c3d389c7b9f38c302955"} Mar 12 08:46:02 crc kubenswrapper[4809]: I0312 08:46:02.912384 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555086-87qxs" podStartSLOduration=1.8851091260000001 podStartE2EDuration="2.912363124s" podCreationTimestamp="2026-03-12 08:46:00 +0000 UTC" firstStartedPulling="2026-03-12 08:46:01.15180005 +0000 UTC 
m=+2834.733835783" lastFinishedPulling="2026-03-12 08:46:02.179054028 +0000 UTC m=+2835.761089781" observedRunningTime="2026-03-12 08:46:02.898414834 +0000 UTC m=+2836.480450567" watchObservedRunningTime="2026-03-12 08:46:02.912363124 +0000 UTC m=+2836.494398857" Mar 12 08:46:03 crc kubenswrapper[4809]: I0312 08:46:03.893344 4809 generic.go:334] "Generic (PLEG): container finished" podID="a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0" containerID="6089abd2f83f30787fcb76e540eea1876da9376ef182c3d389c7b9f38c302955" exitCode=0 Mar 12 08:46:03 crc kubenswrapper[4809]: I0312 08:46:03.893520 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555086-87qxs" event={"ID":"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0","Type":"ContainerDied","Data":"6089abd2f83f30787fcb76e540eea1876da9376ef182c3d389c7b9f38c302955"} Mar 12 08:46:05 crc kubenswrapper[4809]: I0312 08:46:05.321899 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555086-87qxs" Mar 12 08:46:05 crc kubenswrapper[4809]: I0312 08:46:05.515656 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-945m7\" (UniqueName: \"kubernetes.io/projected/a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0-kube-api-access-945m7\") pod \"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0\" (UID: \"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0\") " Mar 12 08:46:05 crc kubenswrapper[4809]: I0312 08:46:05.522504 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0-kube-api-access-945m7" (OuterVolumeSpecName: "kube-api-access-945m7") pod "a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0" (UID: "a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0"). InnerVolumeSpecName "kube-api-access-945m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:46:05 crc kubenswrapper[4809]: I0312 08:46:05.619354 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-945m7\" (UniqueName: \"kubernetes.io/projected/a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0-kube-api-access-945m7\") on node \"crc\" DevicePath \"\"" Mar 12 08:46:05 crc kubenswrapper[4809]: I0312 08:46:05.917435 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555086-87qxs" event={"ID":"a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0","Type":"ContainerDied","Data":"7ff4a2e3d82b42f2836b849fcfd84a8c7100affc3e1d91dcd0a3303c9e6d4e4f"} Mar 12 08:46:05 crc kubenswrapper[4809]: I0312 08:46:05.917789 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff4a2e3d82b42f2836b849fcfd84a8c7100affc3e1d91dcd0a3303c9e6d4e4f" Mar 12 08:46:05 crc kubenswrapper[4809]: I0312 08:46:05.917555 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555086-87qxs" Mar 12 08:46:05 crc kubenswrapper[4809]: I0312 08:46:05.993188 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555080-lt2dh"] Mar 12 08:46:06 crc kubenswrapper[4809]: I0312 08:46:06.008282 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555080-lt2dh"] Mar 12 08:46:07 crc kubenswrapper[4809]: I0312 08:46:07.120684 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f467235-08a5-4fa4-8e2e-d92ac7f6e831" path="/var/lib/kubelet/pods/7f467235-08a5-4fa4-8e2e-d92ac7f6e831/volumes" Mar 12 08:46:15 crc kubenswrapper[4809]: I0312 08:46:15.048926 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 12 08:46:15 crc kubenswrapper[4809]: I0312 08:46:15.049459 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.268668 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dsjc"] Mar 12 08:46:25 crc kubenswrapper[4809]: E0312 08:46:25.269635 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0" containerName="oc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.269649 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0" containerName="oc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.269905 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0" containerName="oc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.289229 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.309775 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dsjc"] Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.455376 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52zr\" (UniqueName: \"kubernetes.io/projected/d8b8e028-6c2f-443a-908e-d509b6e57d09-kube-api-access-h52zr\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.455463 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-utilities\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.455511 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-catalog-content\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.467709 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zlwmg"] Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.471808 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.488705 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zlwmg"] Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.558115 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h52zr\" (UniqueName: \"kubernetes.io/projected/d8b8e028-6c2f-443a-908e-d509b6e57d09-kube-api-access-h52zr\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.558204 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-utilities\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.558263 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-catalog-content\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.558905 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-catalog-content\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.558990 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-utilities\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.577324 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h52zr\" (UniqueName: \"kubernetes.io/projected/d8b8e028-6c2f-443a-908e-d509b6e57d09-kube-api-access-h52zr\") pod \"redhat-marketplace-6dsjc\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") " pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.659886 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dsjc" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.660749 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42tzk\" (UniqueName: \"kubernetes.io/projected/c6578ada-536b-4584-801a-09e783245b71-kube-api-access-42tzk\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.660885 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-utilities\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.661188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-catalog-content\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " 
pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.765781 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42tzk\" (UniqueName: \"kubernetes.io/projected/c6578ada-536b-4584-801a-09e783245b71-kube-api-access-42tzk\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.766159 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-utilities\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.766288 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-catalog-content\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.766769 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-catalog-content\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.768557 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-utilities\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc 
kubenswrapper[4809]: I0312 08:46:25.792735 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42tzk\" (UniqueName: \"kubernetes.io/projected/c6578ada-536b-4584-801a-09e783245b71-kube-api-access-42tzk\") pod \"redhat-operators-zlwmg\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") " pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:25 crc kubenswrapper[4809]: I0312 08:46:25.800904 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zlwmg" Mar 12 08:46:26 crc kubenswrapper[4809]: I0312 08:46:26.317366 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dsjc"] Mar 12 08:46:26 crc kubenswrapper[4809]: I0312 08:46:26.418180 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zlwmg"] Mar 12 08:46:26 crc kubenswrapper[4809]: W0312 08:46:26.426618 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6578ada_536b_4584_801a_09e783245b71.slice/crio-fcf701ded53917d3bb066a8d8578e26472d4f1c8d0014e605dd50e512ee4ac5a WatchSource:0}: Error finding container fcf701ded53917d3bb066a8d8578e26472d4f1c8d0014e605dd50e512ee4ac5a: Status 404 returned error can't find the container with id fcf701ded53917d3bb066a8d8578e26472d4f1c8d0014e605dd50e512ee4ac5a Mar 12 08:46:27 crc kubenswrapper[4809]: I0312 08:46:27.197132 4809 generic.go:334] "Generic (PLEG): container finished" podID="c6578ada-536b-4584-801a-09e783245b71" containerID="01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88" exitCode=0 Mar 12 08:46:27 crc kubenswrapper[4809]: I0312 08:46:27.197528 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlwmg" 
event={"ID":"c6578ada-536b-4584-801a-09e783245b71","Type":"ContainerDied","Data":"01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88"} Mar 12 08:46:27 crc kubenswrapper[4809]: I0312 08:46:27.197564 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlwmg" event={"ID":"c6578ada-536b-4584-801a-09e783245b71","Type":"ContainerStarted","Data":"fcf701ded53917d3bb066a8d8578e26472d4f1c8d0014e605dd50e512ee4ac5a"} Mar 12 08:46:27 crc kubenswrapper[4809]: I0312 08:46:27.201899 4809 generic.go:334] "Generic (PLEG): container finished" podID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerID="a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688" exitCode=0 Mar 12 08:46:27 crc kubenswrapper[4809]: I0312 08:46:27.201933 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dsjc" event={"ID":"d8b8e028-6c2f-443a-908e-d509b6e57d09","Type":"ContainerDied","Data":"a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688"} Mar 12 08:46:27 crc kubenswrapper[4809]: I0312 08:46:27.201959 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dsjc" event={"ID":"d8b8e028-6c2f-443a-908e-d509b6e57d09","Type":"ContainerStarted","Data":"2de44aae301a957523384456ed3548d77b4bb705aa8c1b82be9684a35881f4af"} Mar 12 08:46:28 crc kubenswrapper[4809]: I0312 08:46:28.219432 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlwmg" event={"ID":"c6578ada-536b-4584-801a-09e783245b71","Type":"ContainerStarted","Data":"9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823"} Mar 12 08:46:29 crc kubenswrapper[4809]: I0312 08:46:29.232201 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dsjc" 
event={"ID":"d8b8e028-6c2f-443a-908e-d509b6e57d09","Type":"ContainerStarted","Data":"47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558"} Mar 12 08:46:30 crc kubenswrapper[4809]: I0312 08:46:30.243228 4809 generic.go:334] "Generic (PLEG): container finished" podID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerID="47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558" exitCode=0 Mar 12 08:46:30 crc kubenswrapper[4809]: I0312 08:46:30.243618 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dsjc" event={"ID":"d8b8e028-6c2f-443a-908e-d509b6e57d09","Type":"ContainerDied","Data":"47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558"} Mar 12 08:46:31 crc kubenswrapper[4809]: I0312 08:46:31.257960 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dsjc" event={"ID":"d8b8e028-6c2f-443a-908e-d509b6e57d09","Type":"ContainerStarted","Data":"5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f"} Mar 12 08:46:31 crc kubenswrapper[4809]: I0312 08:46:31.276958 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dsjc" podStartSLOduration=2.819476217 podStartE2EDuration="6.276933619s" podCreationTimestamp="2026-03-12 08:46:25 +0000 UTC" firstStartedPulling="2026-03-12 08:46:27.20474836 +0000 UTC m=+2860.786784093" lastFinishedPulling="2026-03-12 08:46:30.662205762 +0000 UTC m=+2864.244241495" observedRunningTime="2026-03-12 08:46:31.275774907 +0000 UTC m=+2864.857810660" watchObservedRunningTime="2026-03-12 08:46:31.276933619 +0000 UTC m=+2864.858969352" Mar 12 08:46:32 crc kubenswrapper[4809]: I0312 08:46:32.559829 4809 scope.go:117] "RemoveContainer" containerID="08332b8c409d593e6beb57266c62b8a9822f886e549db8f35b3d1dd436718a4f" Mar 12 08:46:34 crc kubenswrapper[4809]: I0312 08:46:34.292941 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="c6578ada-536b-4584-801a-09e783245b71" containerID="9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823" exitCode=0
Mar 12 08:46:34 crc kubenswrapper[4809]: I0312 08:46:34.293209 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlwmg" event={"ID":"c6578ada-536b-4584-801a-09e783245b71","Type":"ContainerDied","Data":"9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823"}
Mar 12 08:46:35 crc kubenswrapper[4809]: I0312 08:46:35.306926 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlwmg" event={"ID":"c6578ada-536b-4584-801a-09e783245b71","Type":"ContainerStarted","Data":"0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083"}
Mar 12 08:46:35 crc kubenswrapper[4809]: I0312 08:46:35.332421 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zlwmg" podStartSLOduration=2.833884409 podStartE2EDuration="10.332401913s" podCreationTimestamp="2026-03-12 08:46:25 +0000 UTC" firstStartedPulling="2026-03-12 08:46:27.199458716 +0000 UTC m=+2860.781494449" lastFinishedPulling="2026-03-12 08:46:34.69797622 +0000 UTC m=+2868.280011953" observedRunningTime="2026-03-12 08:46:35.322675718 +0000 UTC m=+2868.904711451" watchObservedRunningTime="2026-03-12 08:46:35.332401913 +0000 UTC m=+2868.914437646"
Mar 12 08:46:35 crc kubenswrapper[4809]: I0312 08:46:35.666769 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dsjc"
Mar 12 08:46:35 crc kubenswrapper[4809]: I0312 08:46:35.666850 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dsjc"
Mar 12 08:46:35 crc kubenswrapper[4809]: I0312 08:46:35.802652 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zlwmg"
Mar 12 08:46:35 crc kubenswrapper[4809]: I0312 08:46:35.802697 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zlwmg"
Mar 12 08:46:36 crc kubenswrapper[4809]: I0312 08:46:36.733531 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6dsjc" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="registry-server" probeResult="failure" output=<
Mar 12 08:46:36 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 08:46:36 crc kubenswrapper[4809]: >
Mar 12 08:46:36 crc kubenswrapper[4809]: I0312 08:46:36.849329 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zlwmg" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="registry-server" probeResult="failure" output=<
Mar 12 08:46:36 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 08:46:36 crc kubenswrapper[4809]: >
Mar 12 08:46:45 crc kubenswrapper[4809]: I0312 08:46:45.048457 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 08:46:45 crc kubenswrapper[4809]: I0312 08:46:45.049023 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 08:46:45 crc kubenswrapper[4809]: I0312 08:46:45.710129 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dsjc"
Mar 12 08:46:45 crc kubenswrapper[4809]: I0312 08:46:45.771199 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dsjc"
Mar 12 08:46:45 crc kubenswrapper[4809]: I0312 08:46:45.950055 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dsjc"]
Mar 12 08:46:46 crc kubenswrapper[4809]: I0312 08:46:46.897099 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zlwmg" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="registry-server" probeResult="failure" output=<
Mar 12 08:46:46 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 08:46:46 crc kubenswrapper[4809]: >
Mar 12 08:46:47 crc kubenswrapper[4809]: I0312 08:46:47.441157 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dsjc" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="registry-server" containerID="cri-o://5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f" gracePeriod=2
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.010247 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dsjc"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.104010 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-catalog-content\") pod \"d8b8e028-6c2f-443a-908e-d509b6e57d09\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") "
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.104102 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h52zr\" (UniqueName: \"kubernetes.io/projected/d8b8e028-6c2f-443a-908e-d509b6e57d09-kube-api-access-h52zr\") pod \"d8b8e028-6c2f-443a-908e-d509b6e57d09\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") "
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.104188 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-utilities\") pod \"d8b8e028-6c2f-443a-908e-d509b6e57d09\" (UID: \"d8b8e028-6c2f-443a-908e-d509b6e57d09\") "
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.105972 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-utilities" (OuterVolumeSpecName: "utilities") pod "d8b8e028-6c2f-443a-908e-d509b6e57d09" (UID: "d8b8e028-6c2f-443a-908e-d509b6e57d09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.113310 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b8e028-6c2f-443a-908e-d509b6e57d09-kube-api-access-h52zr" (OuterVolumeSpecName: "kube-api-access-h52zr") pod "d8b8e028-6c2f-443a-908e-d509b6e57d09" (UID: "d8b8e028-6c2f-443a-908e-d509b6e57d09"). InnerVolumeSpecName "kube-api-access-h52zr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.138222 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8b8e028-6c2f-443a-908e-d509b6e57d09" (UID: "d8b8e028-6c2f-443a-908e-d509b6e57d09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.207015 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.207087 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h52zr\" (UniqueName: \"kubernetes.io/projected/d8b8e028-6c2f-443a-908e-d509b6e57d09-kube-api-access-h52zr\") on node \"crc\" DevicePath \"\""
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.207099 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8e028-6c2f-443a-908e-d509b6e57d09-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.453177 4809 generic.go:334] "Generic (PLEG): container finished" podID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerID="5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f" exitCode=0
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.453224 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dsjc" event={"ID":"d8b8e028-6c2f-443a-908e-d509b6e57d09","Type":"ContainerDied","Data":"5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f"}
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.453261 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dsjc" event={"ID":"d8b8e028-6c2f-443a-908e-d509b6e57d09","Type":"ContainerDied","Data":"2de44aae301a957523384456ed3548d77b4bb705aa8c1b82be9684a35881f4af"}
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.453289 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dsjc"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.453306 4809 scope.go:117] "RemoveContainer" containerID="5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.493801 4809 scope.go:117] "RemoveContainer" containerID="47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.496755 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dsjc"]
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.534618 4809 scope.go:117] "RemoveContainer" containerID="a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.544839 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dsjc"]
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.602377 4809 scope.go:117] "RemoveContainer" containerID="5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f"
Mar 12 08:46:48 crc kubenswrapper[4809]: E0312 08:46:48.603195 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f\": container with ID starting with 5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f not found: ID does not exist" containerID="5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.603246 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f"} err="failed to get container status \"5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f\": rpc error: code = NotFound desc = could not find container \"5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f\": container with ID starting with 5b75573b9d3dc119d3b288ca5f7d12c4782154e0051949bee87ec52d192f614f not found: ID does not exist"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.603272 4809 scope.go:117] "RemoveContainer" containerID="47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558"
Mar 12 08:46:48 crc kubenswrapper[4809]: E0312 08:46:48.603505 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558\": container with ID starting with 47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558 not found: ID does not exist" containerID="47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.603538 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558"} err="failed to get container status \"47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558\": rpc error: code = NotFound desc = could not find container \"47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558\": container with ID starting with 47d52d640593fe463bd163cfb57ff47c97ea693d62d927c3a5c96a1db41b9558 not found: ID does not exist"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.603557 4809 scope.go:117] "RemoveContainer" containerID="a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688"
Mar 12 08:46:48 crc kubenswrapper[4809]: E0312 08:46:48.603731 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688\": container with ID starting with a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688 not found: ID does not exist" containerID="a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688"
Mar 12 08:46:48 crc kubenswrapper[4809]: I0312 08:46:48.603753 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688"} err="failed to get container status \"a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688\": rpc error: code = NotFound desc = could not find container \"a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688\": container with ID starting with a58bd5b6b02cbe44e6caa1d6f50b83af5408da48753ad7f3aeb5553d00d78688 not found: ID does not exist"
Mar 12 08:46:49 crc kubenswrapper[4809]: I0312 08:46:49.118990 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" path="/var/lib/kubelet/pods/d8b8e028-6c2f-443a-908e-d509b6e57d09/volumes"
Mar 12 08:46:55 crc kubenswrapper[4809]: I0312 08:46:55.863729 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zlwmg"
Mar 12 08:46:55 crc kubenswrapper[4809]: I0312 08:46:55.920782 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zlwmg"
Mar 12 08:46:56 crc kubenswrapper[4809]: I0312 08:46:56.477562 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zlwmg"]
Mar 12 08:46:57 crc kubenswrapper[4809]: I0312 08:46:57.572267 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zlwmg" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="registry-server" containerID="cri-o://0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083" gracePeriod=2
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.160275 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zlwmg"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.310188 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-utilities\") pod \"c6578ada-536b-4584-801a-09e783245b71\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") "
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.310531 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42tzk\" (UniqueName: \"kubernetes.io/projected/c6578ada-536b-4584-801a-09e783245b71-kube-api-access-42tzk\") pod \"c6578ada-536b-4584-801a-09e783245b71\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") "
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.310688 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-catalog-content\") pod \"c6578ada-536b-4584-801a-09e783245b71\" (UID: \"c6578ada-536b-4584-801a-09e783245b71\") "
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.311965 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-utilities" (OuterVolumeSpecName: "utilities") pod "c6578ada-536b-4584-801a-09e783245b71" (UID: "c6578ada-536b-4584-801a-09e783245b71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.316505 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6578ada-536b-4584-801a-09e783245b71-kube-api-access-42tzk" (OuterVolumeSpecName: "kube-api-access-42tzk") pod "c6578ada-536b-4584-801a-09e783245b71" (UID: "c6578ada-536b-4584-801a-09e783245b71"). InnerVolumeSpecName "kube-api-access-42tzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.414742 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42tzk\" (UniqueName: \"kubernetes.io/projected/c6578ada-536b-4584-801a-09e783245b71-kube-api-access-42tzk\") on node \"crc\" DevicePath \"\""
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.414785 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.450600 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6578ada-536b-4584-801a-09e783245b71" (UID: "c6578ada-536b-4584-801a-09e783245b71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.517323 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6578ada-536b-4584-801a-09e783245b71-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.582599 4809 generic.go:334] "Generic (PLEG): container finished" podID="c6578ada-536b-4584-801a-09e783245b71" containerID="0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083" exitCode=0
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.582654 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zlwmg"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.582674 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlwmg" event={"ID":"c6578ada-536b-4584-801a-09e783245b71","Type":"ContainerDied","Data":"0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083"}
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.583385 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlwmg" event={"ID":"c6578ada-536b-4584-801a-09e783245b71","Type":"ContainerDied","Data":"fcf701ded53917d3bb066a8d8578e26472d4f1c8d0014e605dd50e512ee4ac5a"}
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.583416 4809 scope.go:117] "RemoveContainer" containerID="0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.619591 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zlwmg"]
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.622998 4809 scope.go:117] "RemoveContainer" containerID="9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.631976 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zlwmg"]
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.647172 4809 scope.go:117] "RemoveContainer" containerID="01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.775166 4809 scope.go:117] "RemoveContainer" containerID="0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083"
Mar 12 08:46:58 crc kubenswrapper[4809]: E0312 08:46:58.775593 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083\": container with ID starting with 0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083 not found: ID does not exist" containerID="0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.775644 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083"} err="failed to get container status \"0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083\": rpc error: code = NotFound desc = could not find container \"0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083\": container with ID starting with 0cc89af5f49877715efd4a32fbebbeaa4eb68f4f99835d98ec225fceee82e083 not found: ID does not exist"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.775675 4809 scope.go:117] "RemoveContainer" containerID="9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823"
Mar 12 08:46:58 crc kubenswrapper[4809]: E0312 08:46:58.776211 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823\": container with ID starting with 9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823 not found: ID does not exist" containerID="9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.776291 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823"} err="failed to get container status \"9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823\": rpc error: code = NotFound desc = could not find container \"9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823\": container with ID starting with 9f50171b025fdfe13480b6d97b13c1249c44cb3372876e7c7a8b2744b673d823 not found: ID does not exist"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.776337 4809 scope.go:117] "RemoveContainer" containerID="01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88"
Mar 12 08:46:58 crc kubenswrapper[4809]: E0312 08:46:58.776700 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88\": container with ID starting with 01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88 not found: ID does not exist" containerID="01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88"
Mar 12 08:46:58 crc kubenswrapper[4809]: I0312 08:46:58.776729 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88"} err="failed to get container status \"01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88\": rpc error: code = NotFound desc = could not find container \"01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88\": container with ID starting with 01e5995eca7defc9c5d4ea4654af52de96487727f4f6cbf2c295b9a2160fcd88 not found: ID does not exist"
Mar 12 08:46:59 crc kubenswrapper[4809]: I0312 08:46:59.129435 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6578ada-536b-4584-801a-09e783245b71" path="/var/lib/kubelet/pods/c6578ada-536b-4584-801a-09e783245b71/volumes"
Mar 12 08:46:59 crc kubenswrapper[4809]: I0312 08:46:59.602511 4809 generic.go:334] "Generic (PLEG): container finished" podID="0fb437d4-d106-4655-8a3f-05446deb2be1" containerID="155b576d8b607f8c899cc74a9ece259c83ea9152ef79abc019be2e77a82b0307" exitCode=0
Mar 12 08:46:59 crc kubenswrapper[4809]: I0312 08:46:59.602587 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml" event={"ID":"0fb437d4-d106-4655-8a3f-05446deb2be1","Type":"ContainerDied","Data":"155b576d8b607f8c899cc74a9ece259c83ea9152ef79abc019be2e77a82b0307"}
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.110046 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml"
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286241 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-ssh-key-openstack-edpm-ipam\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286318 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-1\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286351 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-inventory\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286379 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-combined-ca-bundle\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286504 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-3\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286528 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-0\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286670 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-0\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286718 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-1\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286762 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-2\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286786 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-extra-config-0\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.286829 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzdcl\" (UniqueName: \"kubernetes.io/projected/0fb437d4-d106-4655-8a3f-05446deb2be1-kube-api-access-rzdcl\") pod \"0fb437d4-d106-4655-8a3f-05446deb2be1\" (UID: \"0fb437d4-d106-4655-8a3f-05446deb2be1\") "
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.292574 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb437d4-d106-4655-8a3f-05446deb2be1-kube-api-access-rzdcl" (OuterVolumeSpecName: "kube-api-access-rzdcl") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "kube-api-access-rzdcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.292920 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.319236 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-inventory" (OuterVolumeSpecName: "inventory") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.320636 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.321502 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.326070 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.326709 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.327197 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.336037 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.343285 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.343850 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0fb437d4-d106-4655-8a3f-05446deb2be1" (UID: "0fb437d4-d106-4655-8a3f-05446deb2be1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389419 4809 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389481 4809 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389491 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzdcl\" (UniqueName: \"kubernetes.io/projected/0fb437d4-d106-4655-8a3f-05446deb2be1-kube-api-access-rzdcl\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389500 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389508 4809 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389518 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-inventory\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389528 4809 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389537 4809 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389550 4809 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389560 4809 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.389570 4809 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0fb437d4-d106-4655-8a3f-05446deb2be1-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.637300 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml" event={"ID":"0fb437d4-d106-4655-8a3f-05446deb2be1","Type":"ContainerDied","Data":"0c5e46b4de9f961cea6512640479eecd273fad92664bf0a107dbf3c0815b5105"}
Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.637350 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c5e46b4de9f961cea6512640479eecd273fad92664bf0a107dbf3c0815b5105" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.637428 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w88ml" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.739799 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d"] Mar 12 08:47:01 crc kubenswrapper[4809]: E0312 08:47:01.741739 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="extract-utilities" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.741775 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="extract-utilities" Mar 12 08:47:01 crc kubenswrapper[4809]: E0312 08:47:01.741813 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="extract-content" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.741823 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="extract-content" Mar 12 08:47:01 crc kubenswrapper[4809]: E0312 08:47:01.741840 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="extract-content" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.741848 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="extract-content" Mar 12 08:47:01 crc kubenswrapper[4809]: E0312 08:47:01.741875 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb437d4-d106-4655-8a3f-05446deb2be1" 
containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.741883 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb437d4-d106-4655-8a3f-05446deb2be1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 08:47:01 crc kubenswrapper[4809]: E0312 08:47:01.741896 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="registry-server" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.741904 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="registry-server" Mar 12 08:47:01 crc kubenswrapper[4809]: E0312 08:47:01.741922 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="registry-server" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.741930 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="registry-server" Mar 12 08:47:01 crc kubenswrapper[4809]: E0312 08:47:01.741953 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="extract-utilities" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.741963 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="extract-utilities" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.742266 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6578ada-536b-4584-801a-09e783245b71" containerName="registry-server" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.742298 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb437d4-d106-4655-8a3f-05446deb2be1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.742316 4809 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d8b8e028-6c2f-443a-908e-d509b6e57d09" containerName="registry-server" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.750291 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.752742 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.753682 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.756081 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.756461 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.756640 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.758513 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d"] Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.900362 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.900744 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.900876 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.900956 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85ps4\" (UniqueName: \"kubernetes.io/projected/590ac6ab-bccd-4261-bff2-c0027731a4af-kube-api-access-85ps4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.901137 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.901192 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:01 crc kubenswrapper[4809]: I0312 08:47:01.901253 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.003955 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.004350 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85ps4\" (UniqueName: \"kubernetes.io/projected/590ac6ab-bccd-4261-bff2-c0027731a4af-kube-api-access-85ps4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.004583 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.004724 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.004872 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.005074 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.005197 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 
crc kubenswrapper[4809]: I0312 08:47:02.008525 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.008916 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.009404 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.009433 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.010163 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.010259 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.022650 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85ps4\" (UniqueName: \"kubernetes.io/projected/590ac6ab-bccd-4261-bff2-c0027731a4af-kube-api-access-85ps4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5b29d\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.074785 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.636059 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d"] Mar 12 08:47:02 crc kubenswrapper[4809]: I0312 08:47:02.653678 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" event={"ID":"590ac6ab-bccd-4261-bff2-c0027731a4af","Type":"ContainerStarted","Data":"f1ddfc5ffba6193fa8817fb46eaa63b09e5d661140976a0d1df84cf6718b8662"} Mar 12 08:47:03 crc kubenswrapper[4809]: I0312 08:47:03.672011 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" event={"ID":"590ac6ab-bccd-4261-bff2-c0027731a4af","Type":"ContainerStarted","Data":"3fb7b2309c4425d6ee6cedd115c13eb2d6ec52a6fb931121db0a0351700658f4"} Mar 12 08:47:03 crc kubenswrapper[4809]: I0312 08:47:03.694480 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" podStartSLOduration=2.238486296 podStartE2EDuration="2.694438621s" podCreationTimestamp="2026-03-12 08:47:01 +0000 UTC" firstStartedPulling="2026-03-12 08:47:02.636050835 +0000 UTC m=+2896.218086568" lastFinishedPulling="2026-03-12 08:47:03.09200316 +0000 UTC m=+2896.674038893" observedRunningTime="2026-03-12 08:47:03.689682932 +0000 UTC m=+2897.271718665" watchObservedRunningTime="2026-03-12 08:47:03.694438621 +0000 UTC m=+2897.276474394" Mar 12 08:47:15 crc kubenswrapper[4809]: I0312 08:47:15.048381 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:47:15 crc kubenswrapper[4809]: 
I0312 08:47:15.048997 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:47:15 crc kubenswrapper[4809]: I0312 08:47:15.049044 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:47:15 crc kubenswrapper[4809]: I0312 08:47:15.050065 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:47:15 crc kubenswrapper[4809]: I0312 08:47:15.050147 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" gracePeriod=600 Mar 12 08:47:15 crc kubenswrapper[4809]: E0312 08:47:15.180147 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:47:15 crc kubenswrapper[4809]: I0312 08:47:15.821534 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" exitCode=0 Mar 12 08:47:15 crc kubenswrapper[4809]: I0312 08:47:15.821612 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c"} Mar 12 08:47:15 crc kubenswrapper[4809]: I0312 08:47:15.821889 4809 scope.go:117] "RemoveContainer" containerID="347f0d93199140295c3a4a9ffbf332939fc6998442e7b2f348c94f44201c547b" Mar 12 08:47:15 crc kubenswrapper[4809]: I0312 08:47:15.822912 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:47:15 crc kubenswrapper[4809]: E0312 08:47:15.823477 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:47:26 crc kubenswrapper[4809]: I0312 08:47:26.106378 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:47:26 crc kubenswrapper[4809]: E0312 08:47:26.107180 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 
08:47:40 crc kubenswrapper[4809]: I0312 08:47:40.108328 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:47:40 crc kubenswrapper[4809]: E0312 08:47:40.109884 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:47:53 crc kubenswrapper[4809]: I0312 08:47:53.106914 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:47:53 crc kubenswrapper[4809]: E0312 08:47:53.108025 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.171173 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555088-5g6fz"] Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.173885 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.176563 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.176742 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.181223 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.192846 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555088-5g6fz"] Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.254471 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsfk4\" (UniqueName: \"kubernetes.io/projected/cf4ca1fa-82c4-44c8-897f-f8e194959127-kube-api-access-bsfk4\") pod \"auto-csr-approver-29555088-5g6fz\" (UID: \"cf4ca1fa-82c4-44c8-897f-f8e194959127\") " pod="openshift-infra/auto-csr-approver-29555088-5g6fz" Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.357065 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsfk4\" (UniqueName: \"kubernetes.io/projected/cf4ca1fa-82c4-44c8-897f-f8e194959127-kube-api-access-bsfk4\") pod \"auto-csr-approver-29555088-5g6fz\" (UID: \"cf4ca1fa-82c4-44c8-897f-f8e194959127\") " pod="openshift-infra/auto-csr-approver-29555088-5g6fz" Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.383239 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsfk4\" (UniqueName: \"kubernetes.io/projected/cf4ca1fa-82c4-44c8-897f-f8e194959127-kube-api-access-bsfk4\") pod \"auto-csr-approver-29555088-5g6fz\" (UID: \"cf4ca1fa-82c4-44c8-897f-f8e194959127\") " 
pod="openshift-infra/auto-csr-approver-29555088-5g6fz" Mar 12 08:48:00 crc kubenswrapper[4809]: I0312 08:48:00.502266 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" Mar 12 08:48:01 crc kubenswrapper[4809]: I0312 08:48:01.038229 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555088-5g6fz"] Mar 12 08:48:01 crc kubenswrapper[4809]: I0312 08:48:01.409817 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" event={"ID":"cf4ca1fa-82c4-44c8-897f-f8e194959127","Type":"ContainerStarted","Data":"4676a8d42ac9b9cbc8d41c16fb24ebaed89d7458a7b35dca6010976cd9da6038"} Mar 12 08:48:03 crc kubenswrapper[4809]: I0312 08:48:03.432735 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" event={"ID":"cf4ca1fa-82c4-44c8-897f-f8e194959127","Type":"ContainerStarted","Data":"51699eecda0deaaf166b6409a0da95a1e8617d2c877f1784006d242c6c9d8e97"} Mar 12 08:48:03 crc kubenswrapper[4809]: I0312 08:48:03.471871 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" podStartSLOduration=2.178161774 podStartE2EDuration="3.471843276s" podCreationTimestamp="2026-03-12 08:48:00 +0000 UTC" firstStartedPulling="2026-03-12 08:48:01.047048607 +0000 UTC m=+2954.629084350" lastFinishedPulling="2026-03-12 08:48:02.340730119 +0000 UTC m=+2955.922765852" observedRunningTime="2026-03-12 08:48:03.4466463 +0000 UTC m=+2957.028682043" watchObservedRunningTime="2026-03-12 08:48:03.471843276 +0000 UTC m=+2957.053879019" Mar 12 08:48:04 crc kubenswrapper[4809]: I0312 08:48:04.443734 4809 generic.go:334] "Generic (PLEG): container finished" podID="cf4ca1fa-82c4-44c8-897f-f8e194959127" containerID="51699eecda0deaaf166b6409a0da95a1e8617d2c877f1784006d242c6c9d8e97" exitCode=0 Mar 12 08:48:04 crc 
kubenswrapper[4809]: I0312 08:48:04.443840 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" event={"ID":"cf4ca1fa-82c4-44c8-897f-f8e194959127","Type":"ContainerDied","Data":"51699eecda0deaaf166b6409a0da95a1e8617d2c877f1784006d242c6c9d8e97"} Mar 12 08:48:05 crc kubenswrapper[4809]: I0312 08:48:05.918609 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" Mar 12 08:48:06 crc kubenswrapper[4809]: I0312 08:48:06.017886 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsfk4\" (UniqueName: \"kubernetes.io/projected/cf4ca1fa-82c4-44c8-897f-f8e194959127-kube-api-access-bsfk4\") pod \"cf4ca1fa-82c4-44c8-897f-f8e194959127\" (UID: \"cf4ca1fa-82c4-44c8-897f-f8e194959127\") " Mar 12 08:48:06 crc kubenswrapper[4809]: I0312 08:48:06.031347 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4ca1fa-82c4-44c8-897f-f8e194959127-kube-api-access-bsfk4" (OuterVolumeSpecName: "kube-api-access-bsfk4") pod "cf4ca1fa-82c4-44c8-897f-f8e194959127" (UID: "cf4ca1fa-82c4-44c8-897f-f8e194959127"). InnerVolumeSpecName "kube-api-access-bsfk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:48:06 crc kubenswrapper[4809]: I0312 08:48:06.121435 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsfk4\" (UniqueName: \"kubernetes.io/projected/cf4ca1fa-82c4-44c8-897f-f8e194959127-kube-api-access-bsfk4\") on node \"crc\" DevicePath \"\"" Mar 12 08:48:06 crc kubenswrapper[4809]: I0312 08:48:06.476102 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" event={"ID":"cf4ca1fa-82c4-44c8-897f-f8e194959127","Type":"ContainerDied","Data":"4676a8d42ac9b9cbc8d41c16fb24ebaed89d7458a7b35dca6010976cd9da6038"} Mar 12 08:48:06 crc kubenswrapper[4809]: I0312 08:48:06.476232 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4676a8d42ac9b9cbc8d41c16fb24ebaed89d7458a7b35dca6010976cd9da6038" Mar 12 08:48:06 crc kubenswrapper[4809]: I0312 08:48:06.476344 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555088-5g6fz" Mar 12 08:48:06 crc kubenswrapper[4809]: I0312 08:48:06.530214 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555082-k4zh7"] Mar 12 08:48:06 crc kubenswrapper[4809]: I0312 08:48:06.542819 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555082-k4zh7"] Mar 12 08:48:07 crc kubenswrapper[4809]: I0312 08:48:07.125898 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6d6056-9b82-41d2-8042-ab2e5aa1376f" path="/var/lib/kubelet/pods/3d6d6056-9b82-41d2-8042-ab2e5aa1376f/volumes" Mar 12 08:48:08 crc kubenswrapper[4809]: I0312 08:48:08.106102 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:48:08 crc kubenswrapper[4809]: E0312 08:48:08.106619 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:48:23 crc kubenswrapper[4809]: I0312 08:48:23.105560 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:48:23 crc kubenswrapper[4809]: E0312 08:48:23.107529 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:48:32 crc kubenswrapper[4809]: I0312 08:48:32.706589 4809 scope.go:117] "RemoveContainer" containerID="ac321b7f95d4b5fe1ace5f28aa530d64e0a7c789d5698607b6eaac8883468dd4" Mar 12 08:48:38 crc kubenswrapper[4809]: I0312 08:48:38.106881 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:48:38 crc kubenswrapper[4809]: E0312 08:48:38.107730 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:48:51 crc kubenswrapper[4809]: I0312 08:48:51.107062 4809 scope.go:117] "RemoveContainer" 
containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:48:51 crc kubenswrapper[4809]: E0312 08:48:51.107862 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:49:02 crc kubenswrapper[4809]: I0312 08:49:02.107568 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:49:02 crc kubenswrapper[4809]: E0312 08:49:02.108466 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:49:15 crc kubenswrapper[4809]: I0312 08:49:15.106745 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:49:15 crc kubenswrapper[4809]: E0312 08:49:15.107602 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:49:15 crc kubenswrapper[4809]: I0312 08:49:15.349792 4809 generic.go:334] 
"Generic (PLEG): container finished" podID="590ac6ab-bccd-4261-bff2-c0027731a4af" containerID="3fb7b2309c4425d6ee6cedd115c13eb2d6ec52a6fb931121db0a0351700658f4" exitCode=0 Mar 12 08:49:15 crc kubenswrapper[4809]: I0312 08:49:15.349857 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" event={"ID":"590ac6ab-bccd-4261-bff2-c0027731a4af","Type":"ContainerDied","Data":"3fb7b2309c4425d6ee6cedd115c13eb2d6ec52a6fb931121db0a0351700658f4"} Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.880618 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.967053 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-0\") pod \"590ac6ab-bccd-4261-bff2-c0027731a4af\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.967234 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-telemetry-combined-ca-bundle\") pod \"590ac6ab-bccd-4261-bff2-c0027731a4af\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.967298 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ssh-key-openstack-edpm-ipam\") pod \"590ac6ab-bccd-4261-bff2-c0027731a4af\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.967383 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-inventory\") pod \"590ac6ab-bccd-4261-bff2-c0027731a4af\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.967464 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-1\") pod \"590ac6ab-bccd-4261-bff2-c0027731a4af\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.967522 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85ps4\" (UniqueName: \"kubernetes.io/projected/590ac6ab-bccd-4261-bff2-c0027731a4af-kube-api-access-85ps4\") pod \"590ac6ab-bccd-4261-bff2-c0027731a4af\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.967581 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-2\") pod \"590ac6ab-bccd-4261-bff2-c0027731a4af\" (UID: \"590ac6ab-bccd-4261-bff2-c0027731a4af\") " Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.976394 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590ac6ab-bccd-4261-bff2-c0027731a4af-kube-api-access-85ps4" (OuterVolumeSpecName: "kube-api-access-85ps4") pod "590ac6ab-bccd-4261-bff2-c0027731a4af" (UID: "590ac6ab-bccd-4261-bff2-c0027731a4af"). InnerVolumeSpecName "kube-api-access-85ps4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:49:16 crc kubenswrapper[4809]: I0312 08:49:16.990429 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "590ac6ab-bccd-4261-bff2-c0027731a4af" (UID: "590ac6ab-bccd-4261-bff2-c0027731a4af"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.006327 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-inventory" (OuterVolumeSpecName: "inventory") pod "590ac6ab-bccd-4261-bff2-c0027731a4af" (UID: "590ac6ab-bccd-4261-bff2-c0027731a4af"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.027659 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "590ac6ab-bccd-4261-bff2-c0027731a4af" (UID: "590ac6ab-bccd-4261-bff2-c0027731a4af"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.027777 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "590ac6ab-bccd-4261-bff2-c0027731a4af" (UID: "590ac6ab-bccd-4261-bff2-c0027731a4af"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.042524 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "590ac6ab-bccd-4261-bff2-c0027731a4af" (UID: "590ac6ab-bccd-4261-bff2-c0027731a4af"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.045444 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "590ac6ab-bccd-4261-bff2-c0027731a4af" (UID: "590ac6ab-bccd-4261-bff2-c0027731a4af"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.071381 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.071433 4809 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.071446 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.071461 4809 reconciler_common.go:293] 
"Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.071475 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.071492 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85ps4\" (UniqueName: \"kubernetes.io/projected/590ac6ab-bccd-4261-bff2-c0027731a4af-kube-api-access-85ps4\") on node \"crc\" DevicePath \"\"" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.071504 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/590ac6ab-bccd-4261-bff2-c0027731a4af-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.382019 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" event={"ID":"590ac6ab-bccd-4261-bff2-c0027731a4af","Type":"ContainerDied","Data":"f1ddfc5ffba6193fa8817fb46eaa63b09e5d661140976a0d1df84cf6718b8662"} Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.382063 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ddfc5ffba6193fa8817fb46eaa63b09e5d661140976a0d1df84cf6718b8662" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.382193 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5b29d" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.517004 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh"] Mar 12 08:49:17 crc kubenswrapper[4809]: E0312 08:49:17.518043 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590ac6ab-bccd-4261-bff2-c0027731a4af" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.518070 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="590ac6ab-bccd-4261-bff2-c0027731a4af" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 08:49:17 crc kubenswrapper[4809]: E0312 08:49:17.518091 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4ca1fa-82c4-44c8-897f-f8e194959127" containerName="oc" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.518100 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4ca1fa-82c4-44c8-897f-f8e194959127" containerName="oc" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.518470 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="590ac6ab-bccd-4261-bff2-c0027731a4af" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.518507 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4ca1fa-82c4-44c8-897f-f8e194959127" containerName="oc" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.519598 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.523653 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.523732 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.523819 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.523870 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.523951 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.550647 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh"] Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.699812 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.700187 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kx9\" (UniqueName: \"kubernetes.io/projected/16172e01-c601-4b38-81d4-86a28061049a-kube-api-access-q7kx9\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.700311 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.700445 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.700587 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.700681 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.701001 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.803310 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.803456 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.803542 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kx9\" (UniqueName: \"kubernetes.io/projected/16172e01-c601-4b38-81d4-86a28061049a-kube-api-access-q7kx9\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.803578 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.803608 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.803646 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.803673 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-telemetry-power-monitoring-combined-ca-bundle\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.808651 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.809047 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.809521 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.810093 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.810800 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.817942 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.822072 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kx9\" (UniqueName: \"kubernetes.io/projected/16172e01-c601-4b38-81d4-86a28061049a-kube-api-access-q7kx9\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:17 crc kubenswrapper[4809]: I0312 08:49:17.844953 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:49:18 crc kubenswrapper[4809]: I0312 08:49:18.407092 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh"] Mar 12 08:49:19 crc kubenswrapper[4809]: I0312 08:49:19.414861 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" event={"ID":"16172e01-c601-4b38-81d4-86a28061049a","Type":"ContainerStarted","Data":"87d54f67cd688ffc05fc0e36e22a9dfebdbf3189a313e5cc7b33549bb2f6b121"} Mar 12 08:49:19 crc kubenswrapper[4809]: I0312 08:49:19.415610 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" event={"ID":"16172e01-c601-4b38-81d4-86a28061049a","Type":"ContainerStarted","Data":"69f4265f89f3a9adf37aad8365a13bba908efc97729de2bf9182d67ce0428c5c"} Mar 12 08:49:19 crc kubenswrapper[4809]: I0312 08:49:19.445815 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" podStartSLOduration=2.033503272 podStartE2EDuration="2.445784098s" podCreationTimestamp="2026-03-12 08:49:17 +0000 UTC" firstStartedPulling="2026-03-12 08:49:18.411490348 +0000 UTC m=+3031.993526081" lastFinishedPulling="2026-03-12 08:49:18.823771174 +0000 UTC m=+3032.405806907" observedRunningTime="2026-03-12 08:49:19.434242984 +0000 UTC m=+3033.016278727" watchObservedRunningTime="2026-03-12 08:49:19.445784098 +0000 UTC m=+3033.027819841" Mar 12 08:49:28 crc kubenswrapper[4809]: I0312 08:49:28.107691 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:49:28 crc kubenswrapper[4809]: E0312 08:49:28.108674 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:49:42 crc kubenswrapper[4809]: I0312 08:49:42.106839 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:49:42 crc kubenswrapper[4809]: E0312 08:49:42.107651 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:49:54 crc kubenswrapper[4809]: I0312 08:49:54.106749 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:49:54 crc kubenswrapper[4809]: E0312 08:49:54.108076 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.151050 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555090-rsl2p"] Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.154946 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.156995 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.157756 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.157781 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.170564 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555090-rsl2p"] Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.266261 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlpss\" (UniqueName: \"kubernetes.io/projected/9a73449e-b50c-4a14-8031-a7b1aacb85ed-kube-api-access-hlpss\") pod \"auto-csr-approver-29555090-rsl2p\" (UID: \"9a73449e-b50c-4a14-8031-a7b1aacb85ed\") " pod="openshift-infra/auto-csr-approver-29555090-rsl2p" Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.369321 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlpss\" (UniqueName: \"kubernetes.io/projected/9a73449e-b50c-4a14-8031-a7b1aacb85ed-kube-api-access-hlpss\") pod \"auto-csr-approver-29555090-rsl2p\" (UID: \"9a73449e-b50c-4a14-8031-a7b1aacb85ed\") " pod="openshift-infra/auto-csr-approver-29555090-rsl2p" Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.393835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlpss\" (UniqueName: \"kubernetes.io/projected/9a73449e-b50c-4a14-8031-a7b1aacb85ed-kube-api-access-hlpss\") pod \"auto-csr-approver-29555090-rsl2p\" (UID: \"9a73449e-b50c-4a14-8031-a7b1aacb85ed\") " 
pod="openshift-infra/auto-csr-approver-29555090-rsl2p" Mar 12 08:50:00 crc kubenswrapper[4809]: I0312 08:50:00.479710 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" Mar 12 08:50:01 crc kubenswrapper[4809]: I0312 08:50:01.062021 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555090-rsl2p"] Mar 12 08:50:01 crc kubenswrapper[4809]: W0312 08:50:01.073398 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a73449e_b50c_4a14_8031_a7b1aacb85ed.slice/crio-d1bd3e18704102d8e41fd0c29cdf5105bf0c7b9ef1f4a5d64f67dc7da8e91f3f WatchSource:0}: Error finding container d1bd3e18704102d8e41fd0c29cdf5105bf0c7b9ef1f4a5d64f67dc7da8e91f3f: Status 404 returned error can't find the container with id d1bd3e18704102d8e41fd0c29cdf5105bf0c7b9ef1f4a5d64f67dc7da8e91f3f Mar 12 08:50:01 crc kubenswrapper[4809]: I0312 08:50:01.075784 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:50:02 crc kubenswrapper[4809]: I0312 08:50:02.007205 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" event={"ID":"9a73449e-b50c-4a14-8031-a7b1aacb85ed","Type":"ContainerStarted","Data":"d1bd3e18704102d8e41fd0c29cdf5105bf0c7b9ef1f4a5d64f67dc7da8e91f3f"} Mar 12 08:50:03 crc kubenswrapper[4809]: I0312 08:50:03.028831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" event={"ID":"9a73449e-b50c-4a14-8031-a7b1aacb85ed","Type":"ContainerStarted","Data":"a81b26b487a51ff61c8395660886773f0f94554cb486b6046e4da584b7699817"} Mar 12 08:50:03 crc kubenswrapper[4809]: I0312 08:50:03.056740 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" 
podStartSLOduration=1.823436657 podStartE2EDuration="3.056711034s" podCreationTimestamp="2026-03-12 08:50:00 +0000 UTC" firstStartedPulling="2026-03-12 08:50:01.075599426 +0000 UTC m=+3074.657635159" lastFinishedPulling="2026-03-12 08:50:02.308873803 +0000 UTC m=+3075.890909536" observedRunningTime="2026-03-12 08:50:03.05035054 +0000 UTC m=+3076.632386273" watchObservedRunningTime="2026-03-12 08:50:03.056711034 +0000 UTC m=+3076.638746767" Mar 12 08:50:04 crc kubenswrapper[4809]: I0312 08:50:04.042236 4809 generic.go:334] "Generic (PLEG): container finished" podID="9a73449e-b50c-4a14-8031-a7b1aacb85ed" containerID="a81b26b487a51ff61c8395660886773f0f94554cb486b6046e4da584b7699817" exitCode=0 Mar 12 08:50:04 crc kubenswrapper[4809]: I0312 08:50:04.042289 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" event={"ID":"9a73449e-b50c-4a14-8031-a7b1aacb85ed","Type":"ContainerDied","Data":"a81b26b487a51ff61c8395660886773f0f94554cb486b6046e4da584b7699817"} Mar 12 08:50:05 crc kubenswrapper[4809]: I0312 08:50:05.427154 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" Mar 12 08:50:05 crc kubenswrapper[4809]: I0312 08:50:05.556730 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlpss\" (UniqueName: \"kubernetes.io/projected/9a73449e-b50c-4a14-8031-a7b1aacb85ed-kube-api-access-hlpss\") pod \"9a73449e-b50c-4a14-8031-a7b1aacb85ed\" (UID: \"9a73449e-b50c-4a14-8031-a7b1aacb85ed\") " Mar 12 08:50:05 crc kubenswrapper[4809]: I0312 08:50:05.564128 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a73449e-b50c-4a14-8031-a7b1aacb85ed-kube-api-access-hlpss" (OuterVolumeSpecName: "kube-api-access-hlpss") pod "9a73449e-b50c-4a14-8031-a7b1aacb85ed" (UID: "9a73449e-b50c-4a14-8031-a7b1aacb85ed"). InnerVolumeSpecName "kube-api-access-hlpss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:50:05 crc kubenswrapper[4809]: I0312 08:50:05.660247 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlpss\" (UniqueName: \"kubernetes.io/projected/9a73449e-b50c-4a14-8031-a7b1aacb85ed-kube-api-access-hlpss\") on node \"crc\" DevicePath \"\"" Mar 12 08:50:06 crc kubenswrapper[4809]: I0312 08:50:06.066370 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" event={"ID":"9a73449e-b50c-4a14-8031-a7b1aacb85ed","Type":"ContainerDied","Data":"d1bd3e18704102d8e41fd0c29cdf5105bf0c7b9ef1f4a5d64f67dc7da8e91f3f"} Mar 12 08:50:06 crc kubenswrapper[4809]: I0312 08:50:06.066743 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1bd3e18704102d8e41fd0c29cdf5105bf0c7b9ef1f4a5d64f67dc7da8e91f3f" Mar 12 08:50:06 crc kubenswrapper[4809]: I0312 08:50:06.066454 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555090-rsl2p" Mar 12 08:50:06 crc kubenswrapper[4809]: I0312 08:50:06.128681 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555084-k2gpc"] Mar 12 08:50:06 crc kubenswrapper[4809]: I0312 08:50:06.139020 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555084-k2gpc"] Mar 12 08:50:07 crc kubenswrapper[4809]: I0312 08:50:07.115084 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:50:07 crc kubenswrapper[4809]: E0312 08:50:07.115467 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:50:07 crc kubenswrapper[4809]: I0312 08:50:07.122322 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1a295d-1106-45a2-9718-13f7cef581bd" path="/var/lib/kubelet/pods/4a1a295d-1106-45a2-9718-13f7cef581bd/volumes" Mar 12 08:50:21 crc kubenswrapper[4809]: I0312 08:50:21.106415 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:50:21 crc kubenswrapper[4809]: E0312 08:50:21.107829 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:50:32 crc kubenswrapper[4809]: I0312 08:50:32.107739 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:50:32 crc kubenswrapper[4809]: E0312 08:50:32.109319 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:50:32 crc kubenswrapper[4809]: I0312 08:50:32.872456 4809 scope.go:117] "RemoveContainer" containerID="a74fb42a7b0343554a461edd77982d87f15f5ef443fb40ae821b1060e218eca5" Mar 12 08:50:47 crc kubenswrapper[4809]: I0312 08:50:47.128472 4809 scope.go:117] "RemoveContainer" 
containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:50:47 crc kubenswrapper[4809]: E0312 08:50:47.129262 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:50:59 crc kubenswrapper[4809]: I0312 08:50:59.107155 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:50:59 crc kubenswrapper[4809]: E0312 08:50:59.108636 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:51:05 crc kubenswrapper[4809]: I0312 08:51:05.817581 4809 generic.go:334] "Generic (PLEG): container finished" podID="16172e01-c601-4b38-81d4-86a28061049a" containerID="87d54f67cd688ffc05fc0e36e22a9dfebdbf3189a313e5cc7b33549bb2f6b121" exitCode=0 Mar 12 08:51:05 crc kubenswrapper[4809]: I0312 08:51:05.817649 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" event={"ID":"16172e01-c601-4b38-81d4-86a28061049a","Type":"ContainerDied","Data":"87d54f67cd688ffc05fc0e36e22a9dfebdbf3189a313e5cc7b33549bb2f6b121"} Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.391930 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.476500 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-2\") pod \"16172e01-c601-4b38-81d4-86a28061049a\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.476716 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-1\") pod \"16172e01-c601-4b38-81d4-86a28061049a\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.476794 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ssh-key-openstack-edpm-ipam\") pod \"16172e01-c601-4b38-81d4-86a28061049a\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.476960 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-inventory\") pod \"16172e01-c601-4b38-81d4-86a28061049a\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.477001 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-0\") pod \"16172e01-c601-4b38-81d4-86a28061049a\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " Mar 12 08:51:07 crc 
kubenswrapper[4809]: I0312 08:51:07.477040 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7kx9\" (UniqueName: \"kubernetes.io/projected/16172e01-c601-4b38-81d4-86a28061049a-kube-api-access-q7kx9\") pod \"16172e01-c601-4b38-81d4-86a28061049a\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.477325 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-telemetry-power-monitoring-combined-ca-bundle\") pod \"16172e01-c601-4b38-81d4-86a28061049a\" (UID: \"16172e01-c601-4b38-81d4-86a28061049a\") " Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.498513 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16172e01-c601-4b38-81d4-86a28061049a-kube-api-access-q7kx9" (OuterVolumeSpecName: "kube-api-access-q7kx9") pod "16172e01-c601-4b38-81d4-86a28061049a" (UID: "16172e01-c601-4b38-81d4-86a28061049a"). InnerVolumeSpecName "kube-api-access-q7kx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.503153 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "16172e01-c601-4b38-81d4-86a28061049a" (UID: "16172e01-c601-4b38-81d4-86a28061049a"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.518601 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-inventory" (OuterVolumeSpecName: "inventory") pod "16172e01-c601-4b38-81d4-86a28061049a" (UID: "16172e01-c601-4b38-81d4-86a28061049a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.521165 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "16172e01-c601-4b38-81d4-86a28061049a" (UID: "16172e01-c601-4b38-81d4-86a28061049a"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.524084 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "16172e01-c601-4b38-81d4-86a28061049a" (UID: "16172e01-c601-4b38-81d4-86a28061049a"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.527853 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "16172e01-c601-4b38-81d4-86a28061049a" (UID: "16172e01-c601-4b38-81d4-86a28061049a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.533300 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "16172e01-c601-4b38-81d4-86a28061049a" (UID: "16172e01-c601-4b38-81d4-86a28061049a"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.582888 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.582960 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.582985 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.583010 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.583031 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:07 crc 
kubenswrapper[4809]: I0312 08:51:07.583050 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7kx9\" (UniqueName: \"kubernetes.io/projected/16172e01-c601-4b38-81d4-86a28061049a-kube-api-access-q7kx9\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.583069 4809 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16172e01-c601-4b38-81d4-86a28061049a-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.859500 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" event={"ID":"16172e01-c601-4b38-81d4-86a28061049a","Type":"ContainerDied","Data":"69f4265f89f3a9adf37aad8365a13bba908efc97729de2bf9182d67ce0428c5c"} Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.859967 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f4265f89f3a9adf37aad8365a13bba908efc97729de2bf9182d67ce0428c5c" Mar 12 08:51:07 crc kubenswrapper[4809]: I0312 08:51:07.859562 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.975076 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2"] Mar 12 08:51:08 crc kubenswrapper[4809]: E0312 08:51:07.975735 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16172e01-c601-4b38-81d4-86a28061049a" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.975750 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="16172e01-c601-4b38-81d4-86a28061049a" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 12 08:51:08 crc kubenswrapper[4809]: E0312 08:51:07.975777 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a73449e-b50c-4a14-8031-a7b1aacb85ed" containerName="oc" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.975783 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a73449e-b50c-4a14-8031-a7b1aacb85ed" containerName="oc" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.976062 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a73449e-b50c-4a14-8031-a7b1aacb85ed" containerName="oc" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.976078 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="16172e01-c601-4b38-81d4-86a28061049a" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.976978 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.982196 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.982391 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2"] Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:07.982421 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.018414 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7nx95" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.018822 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.018953 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.126296 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.126708 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: 
\"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.126888 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.126931 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxddj\" (UniqueName: \"kubernetes.io/projected/91efdd7b-a690-4c5c-8749-2d4589830be9-kube-api-access-rxddj\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.127188 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.231491 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 
crc kubenswrapper[4809]: I0312 08:51:08.233251 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.233347 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxddj\" (UniqueName: \"kubernetes.io/projected/91efdd7b-a690-4c5c-8749-2d4589830be9-kube-api-access-rxddj\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.233930 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.235021 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.249587 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.250008 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.250202 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.250953 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.252733 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxddj\" (UniqueName: \"kubernetes.io/projected/91efdd7b-a690-4c5c-8749-2d4589830be9-kube-api-access-rxddj\") pod \"logging-edpm-deployment-openstack-edpm-ipam-bmpc2\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:08 crc kubenswrapper[4809]: I0312 08:51:08.338255 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:09 crc kubenswrapper[4809]: I0312 08:51:09.064719 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2"] Mar 12 08:51:09 crc kubenswrapper[4809]: E0312 08:51:09.194651 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Mar 12 08:51:09 crc kubenswrapper[4809]: E0312 08:51:09.195379 4809 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 08:51:09 crc kubenswrapper[4809]: container &Container{Name:logging-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p osp.edpm.telemetry_logging -i logging-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Mar 12 08:51:09 crc kubenswrapper[4809]: osp.edpm.telemetry_logging Mar 12 08:51:09 crc kubenswrapper[4809]: Mar 12 08:51:09 crc kubenswrapper[4809]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Mar 12 08:51:09 crc kubenswrapper[4809]: edpm_override_hosts: openstack-edpm-ipam Mar 12 08:51:09 crc kubenswrapper[4809]: edpm_service_type: logging Mar 12 08:51:09 crc kubenswrapper[4809]: Mar 12 08:51:09 crc kubenswrapper[4809]: Mar 12 08:51:09 crc kubenswrapper[4809]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logging-compute-config-data-0,ReadOnly:false,MountPath:/var/lib/openstack/configs/logging/10-telemetry.conf,SubPath:10-telemetry.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-compute-config-data-1,ReadOnly:false,MountPath:/var/lib/openstack/configs/logging/ca-openshift.crt,SubPath:ca-openshift.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxddj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
logging-edpm-deployment-openstack-edpm-ipam-bmpc2_openstack(91efdd7b-a690-4c5c-8749-2d4589830be9): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway) Mar 12 08:51:09 crc kubenswrapper[4809]: > logger="UnhandledError" Mar 12 08:51:09 crc kubenswrapper[4809]: E0312 08:51:09.196613 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"logging-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)\"" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" podUID="91efdd7b-a690-4c5c-8749-2d4589830be9" Mar 12 08:51:09 crc kubenswrapper[4809]: I0312 08:51:09.886896 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" event={"ID":"91efdd7b-a690-4c5c-8749-2d4589830be9","Type":"ContainerStarted","Data":"8c19a5e8b1ea291b41c90b48404652bdd067fa6a71940234f8c64efdb7afa496"} Mar 12 08:51:09 crc kubenswrapper[4809]: E0312 08:51:09.888823 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"logging-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" podUID="91efdd7b-a690-4c5c-8749-2d4589830be9" Mar 12 08:51:10 crc kubenswrapper[4809]: I0312 08:51:10.106803 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:51:10 crc kubenswrapper[4809]: E0312 08:51:10.107190 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:51:10 crc kubenswrapper[4809]: E0312 08:51:10.898627 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"logging-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" podUID="91efdd7b-a690-4c5c-8749-2d4589830be9" Mar 12 08:51:22 crc kubenswrapper[4809]: I0312 08:51:22.107769 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:51:22 crc kubenswrapper[4809]: E0312 08:51:22.109136 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:51:24 crc kubenswrapper[4809]: I0312 08:51:24.056614 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" event={"ID":"91efdd7b-a690-4c5c-8749-2d4589830be9","Type":"ContainerStarted","Data":"3f7e41bf41a4369d26782290e55d2b258e6d43fc488ad332ae9de34cc99b0391"} Mar 12 08:51:24 crc kubenswrapper[4809]: I0312 08:51:24.089772 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" 
podStartSLOduration=2.607243553 podStartE2EDuration="17.089743056s" podCreationTimestamp="2026-03-12 08:51:07 +0000 UTC" firstStartedPulling="2026-03-12 08:51:09.074850118 +0000 UTC m=+3142.656885851" lastFinishedPulling="2026-03-12 08:51:23.557349581 +0000 UTC m=+3157.139385354" observedRunningTime="2026-03-12 08:51:24.079806616 +0000 UTC m=+3157.661842359" watchObservedRunningTime="2026-03-12 08:51:24.089743056 +0000 UTC m=+3157.671778799" Mar 12 08:51:33 crc kubenswrapper[4809]: I0312 08:51:33.106511 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:51:33 crc kubenswrapper[4809]: E0312 08:51:33.107300 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:51:38 crc kubenswrapper[4809]: I0312 08:51:38.221231 4809 generic.go:334] "Generic (PLEG): container finished" podID="91efdd7b-a690-4c5c-8749-2d4589830be9" containerID="3f7e41bf41a4369d26782290e55d2b258e6d43fc488ad332ae9de34cc99b0391" exitCode=0 Mar 12 08:51:38 crc kubenswrapper[4809]: I0312 08:51:38.221320 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" event={"ID":"91efdd7b-a690-4c5c-8749-2d4589830be9","Type":"ContainerDied","Data":"3f7e41bf41a4369d26782290e55d2b258e6d43fc488ad332ae9de34cc99b0391"} Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.717522 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.805783 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-inventory\") pod \"91efdd7b-a690-4c5c-8749-2d4589830be9\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.806282 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-1\") pod \"91efdd7b-a690-4c5c-8749-2d4589830be9\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.806347 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-0\") pod \"91efdd7b-a690-4c5c-8749-2d4589830be9\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.806376 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxddj\" (UniqueName: \"kubernetes.io/projected/91efdd7b-a690-4c5c-8749-2d4589830be9-kube-api-access-rxddj\") pod \"91efdd7b-a690-4c5c-8749-2d4589830be9\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.806435 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-ssh-key-openstack-edpm-ipam\") pod \"91efdd7b-a690-4c5c-8749-2d4589830be9\" (UID: \"91efdd7b-a690-4c5c-8749-2d4589830be9\") " Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 
08:51:39.816924 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91efdd7b-a690-4c5c-8749-2d4589830be9-kube-api-access-rxddj" (OuterVolumeSpecName: "kube-api-access-rxddj") pod "91efdd7b-a690-4c5c-8749-2d4589830be9" (UID: "91efdd7b-a690-4c5c-8749-2d4589830be9"). InnerVolumeSpecName "kube-api-access-rxddj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.838549 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-inventory" (OuterVolumeSpecName: "inventory") pod "91efdd7b-a690-4c5c-8749-2d4589830be9" (UID: "91efdd7b-a690-4c5c-8749-2d4589830be9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.839671 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "91efdd7b-a690-4c5c-8749-2d4589830be9" (UID: "91efdd7b-a690-4c5c-8749-2d4589830be9"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.840074 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "91efdd7b-a690-4c5c-8749-2d4589830be9" (UID: "91efdd7b-a690-4c5c-8749-2d4589830be9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.842037 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "91efdd7b-a690-4c5c-8749-2d4589830be9" (UID: "91efdd7b-a690-4c5c-8749-2d4589830be9"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.909303 4809 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.909337 4809 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.909350 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxddj\" (UniqueName: \"kubernetes.io/projected/91efdd7b-a690-4c5c-8749-2d4589830be9-kube-api-access-rxddj\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.909359 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:39 crc kubenswrapper[4809]: I0312 08:51:39.909371 4809 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91efdd7b-a690-4c5c-8749-2d4589830be9-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 08:51:40 crc kubenswrapper[4809]: I0312 
08:51:40.242154 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" event={"ID":"91efdd7b-a690-4c5c-8749-2d4589830be9","Type":"ContainerDied","Data":"8c19a5e8b1ea291b41c90b48404652bdd067fa6a71940234f8c64efdb7afa496"} Mar 12 08:51:40 crc kubenswrapper[4809]: I0312 08:51:40.242196 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c19a5e8b1ea291b41c90b48404652bdd067fa6a71940234f8c64efdb7afa496" Mar 12 08:51:40 crc kubenswrapper[4809]: I0312 08:51:40.242210 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-bmpc2" Mar 12 08:51:45 crc kubenswrapper[4809]: I0312 08:51:45.106626 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:51:45 crc kubenswrapper[4809]: E0312 08:51:45.107307 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:51:59 crc kubenswrapper[4809]: I0312 08:51:59.106877 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:51:59 crc kubenswrapper[4809]: E0312 08:51:59.107785 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.161992 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555092-lncvb"] Mar 12 08:52:00 crc kubenswrapper[4809]: E0312 08:52:00.162785 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91efdd7b-a690-4c5c-8749-2d4589830be9" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.162804 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="91efdd7b-a690-4c5c-8749-2d4589830be9" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.163175 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="91efdd7b-a690-4c5c-8749-2d4589830be9" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.164332 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555092-lncvb" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.168038 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.168043 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.168231 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.172601 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555092-lncvb"] Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.222184 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxx2g\" (UniqueName: \"kubernetes.io/projected/372ceaaf-934a-4ebd-92ab-99b7cc28b068-kube-api-access-hxx2g\") pod \"auto-csr-approver-29555092-lncvb\" (UID: \"372ceaaf-934a-4ebd-92ab-99b7cc28b068\") " pod="openshift-infra/auto-csr-approver-29555092-lncvb" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.324905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxx2g\" (UniqueName: \"kubernetes.io/projected/372ceaaf-934a-4ebd-92ab-99b7cc28b068-kube-api-access-hxx2g\") pod \"auto-csr-approver-29555092-lncvb\" (UID: \"372ceaaf-934a-4ebd-92ab-99b7cc28b068\") " pod="openshift-infra/auto-csr-approver-29555092-lncvb" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.345587 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxx2g\" (UniqueName: \"kubernetes.io/projected/372ceaaf-934a-4ebd-92ab-99b7cc28b068-kube-api-access-hxx2g\") pod \"auto-csr-approver-29555092-lncvb\" (UID: \"372ceaaf-934a-4ebd-92ab-99b7cc28b068\") " 
pod="openshift-infra/auto-csr-approver-29555092-lncvb" Mar 12 08:52:00 crc kubenswrapper[4809]: I0312 08:52:00.487361 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555092-lncvb" Mar 12 08:52:01 crc kubenswrapper[4809]: I0312 08:52:01.005237 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555092-lncvb"] Mar 12 08:52:01 crc kubenswrapper[4809]: I0312 08:52:01.473945 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555092-lncvb" event={"ID":"372ceaaf-934a-4ebd-92ab-99b7cc28b068","Type":"ContainerStarted","Data":"f3d940065816478c8eb98922284417f05cc0797cbdc4d0572ac5b4e5e9582316"} Mar 12 08:52:02 crc kubenswrapper[4809]: I0312 08:52:02.485432 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555092-lncvb" event={"ID":"372ceaaf-934a-4ebd-92ab-99b7cc28b068","Type":"ContainerStarted","Data":"718b44fa4dfdedfbf02259f1e596810ab177726b4af7be4e3267468ff4d72cb4"} Mar 12 08:52:03 crc kubenswrapper[4809]: I0312 08:52:03.498405 4809 generic.go:334] "Generic (PLEG): container finished" podID="372ceaaf-934a-4ebd-92ab-99b7cc28b068" containerID="718b44fa4dfdedfbf02259f1e596810ab177726b4af7be4e3267468ff4d72cb4" exitCode=0 Mar 12 08:52:03 crc kubenswrapper[4809]: I0312 08:52:03.498450 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555092-lncvb" event={"ID":"372ceaaf-934a-4ebd-92ab-99b7cc28b068","Type":"ContainerDied","Data":"718b44fa4dfdedfbf02259f1e596810ab177726b4af7be4e3267468ff4d72cb4"} Mar 12 08:52:04 crc kubenswrapper[4809]: I0312 08:52:04.919920 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555092-lncvb" Mar 12 08:52:04 crc kubenswrapper[4809]: I0312 08:52:04.983042 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxx2g\" (UniqueName: \"kubernetes.io/projected/372ceaaf-934a-4ebd-92ab-99b7cc28b068-kube-api-access-hxx2g\") pod \"372ceaaf-934a-4ebd-92ab-99b7cc28b068\" (UID: \"372ceaaf-934a-4ebd-92ab-99b7cc28b068\") " Mar 12 08:52:05 crc kubenswrapper[4809]: I0312 08:52:05.009576 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372ceaaf-934a-4ebd-92ab-99b7cc28b068-kube-api-access-hxx2g" (OuterVolumeSpecName: "kube-api-access-hxx2g") pod "372ceaaf-934a-4ebd-92ab-99b7cc28b068" (UID: "372ceaaf-934a-4ebd-92ab-99b7cc28b068"). InnerVolumeSpecName "kube-api-access-hxx2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:52:05 crc kubenswrapper[4809]: I0312 08:52:05.086797 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxx2g\" (UniqueName: \"kubernetes.io/projected/372ceaaf-934a-4ebd-92ab-99b7cc28b068-kube-api-access-hxx2g\") on node \"crc\" DevicePath \"\"" Mar 12 08:52:05 crc kubenswrapper[4809]: I0312 08:52:05.529860 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555092-lncvb" event={"ID":"372ceaaf-934a-4ebd-92ab-99b7cc28b068","Type":"ContainerDied","Data":"f3d940065816478c8eb98922284417f05cc0797cbdc4d0572ac5b4e5e9582316"} Mar 12 08:52:05 crc kubenswrapper[4809]: I0312 08:52:05.530310 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3d940065816478c8eb98922284417f05cc0797cbdc4d0572ac5b4e5e9582316" Mar 12 08:52:05 crc kubenswrapper[4809]: I0312 08:52:05.530173 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555092-lncvb" Mar 12 08:52:05 crc kubenswrapper[4809]: I0312 08:52:05.630186 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555086-87qxs"] Mar 12 08:52:05 crc kubenswrapper[4809]: I0312 08:52:05.647589 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555086-87qxs"] Mar 12 08:52:07 crc kubenswrapper[4809]: I0312 08:52:07.126289 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0" path="/var/lib/kubelet/pods/a9a75ab1-ea2f-4ef8-877f-6a03c946dcb0/volumes" Mar 12 08:52:10 crc kubenswrapper[4809]: I0312 08:52:10.106381 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:52:10 crc kubenswrapper[4809]: E0312 08:52:10.108449 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:52:23 crc kubenswrapper[4809]: I0312 08:52:23.106033 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:52:23 crc kubenswrapper[4809]: I0312 08:52:23.745292 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"5792759995ae3718ae365ac07d2eb7940dfaa04425d30dececf1d2e6ef7ac054"} Mar 12 08:52:33 crc kubenswrapper[4809]: I0312 08:52:33.017459 4809 scope.go:117] "RemoveContainer" 
containerID="6089abd2f83f30787fcb76e540eea1876da9376ef182c3d389c7b9f38c302955" Mar 12 08:52:44 crc kubenswrapper[4809]: E0312 08:52:44.922875 4809 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:53452->38.102.83.80:34627: write tcp 38.102.83.80:53452->38.102.83.80:34627: write: broken pipe Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.234190 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cnxp5"] Mar 12 08:53:32 crc kubenswrapper[4809]: E0312 08:53:32.235065 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372ceaaf-934a-4ebd-92ab-99b7cc28b068" containerName="oc" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.235077 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="372ceaaf-934a-4ebd-92ab-99b7cc28b068" containerName="oc" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.235337 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="372ceaaf-934a-4ebd-92ab-99b7cc28b068" containerName="oc" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.236925 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.268050 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cnxp5"] Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.369363 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-utilities\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.369413 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2dcc\" (UniqueName: \"kubernetes.io/projected/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-kube-api-access-t2dcc\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.369479 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-catalog-content\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.472270 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-utilities\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.472315 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t2dcc\" (UniqueName: \"kubernetes.io/projected/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-kube-api-access-t2dcc\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.472406 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-catalog-content\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.472877 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-utilities\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.472887 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-catalog-content\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.495399 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2dcc\" (UniqueName: \"kubernetes.io/projected/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-kube-api-access-t2dcc\") pod \"certified-operators-cnxp5\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:32 crc kubenswrapper[4809]: I0312 08:53:32.574432 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:33 crc kubenswrapper[4809]: I0312 08:53:33.184027 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cnxp5"] Mar 12 08:53:33 crc kubenswrapper[4809]: I0312 08:53:33.645455 4809 generic.go:334] "Generic (PLEG): container finished" podID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerID="45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4" exitCode=0 Mar 12 08:53:33 crc kubenswrapper[4809]: I0312 08:53:33.645569 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnxp5" event={"ID":"9a0e09f1-cf57-4f3c-a69a-0be101f82e31","Type":"ContainerDied","Data":"45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4"} Mar 12 08:53:33 crc kubenswrapper[4809]: I0312 08:53:33.645832 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnxp5" event={"ID":"9a0e09f1-cf57-4f3c-a69a-0be101f82e31","Type":"ContainerStarted","Data":"693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008"} Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.623005 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zdl4p"] Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.627673 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.638050 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdl4p"] Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.666480 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnxp5" event={"ID":"9a0e09f1-cf57-4f3c-a69a-0be101f82e31","Type":"ContainerStarted","Data":"be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8"} Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.733276 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-utilities\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.733346 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-catalog-content\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.733455 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvlpl\" (UniqueName: \"kubernetes.io/projected/50a4501d-819a-4356-aa1d-b44db59080c8-kube-api-access-fvlpl\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.836064 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvlpl\" 
(UniqueName: \"kubernetes.io/projected/50a4501d-819a-4356-aa1d-b44db59080c8-kube-api-access-fvlpl\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.836374 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-utilities\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.836455 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-catalog-content\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.836837 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-utilities\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.836849 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-catalog-content\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.861372 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvlpl\" (UniqueName: 
\"kubernetes.io/projected/50a4501d-819a-4356-aa1d-b44db59080c8-kube-api-access-fvlpl\") pod \"community-operators-zdl4p\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:34 crc kubenswrapper[4809]: I0312 08:53:34.969092 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:35 crc kubenswrapper[4809]: I0312 08:53:35.538754 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdl4p"] Mar 12 08:53:35 crc kubenswrapper[4809]: I0312 08:53:35.678738 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdl4p" event={"ID":"50a4501d-819a-4356-aa1d-b44db59080c8","Type":"ContainerStarted","Data":"0113b2cc6632ef2dc7523d0ffd1b2132c8b2af3a4317e5b44266cf36b39a34d3"} Mar 12 08:53:36 crc kubenswrapper[4809]: I0312 08:53:36.693383 4809 generic.go:334] "Generic (PLEG): container finished" podID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerID="be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8" exitCode=0 Mar 12 08:53:36 crc kubenswrapper[4809]: I0312 08:53:36.693497 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnxp5" event={"ID":"9a0e09f1-cf57-4f3c-a69a-0be101f82e31","Type":"ContainerDied","Data":"be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8"} Mar 12 08:53:36 crc kubenswrapper[4809]: I0312 08:53:36.697688 4809 generic.go:334] "Generic (PLEG): container finished" podID="50a4501d-819a-4356-aa1d-b44db59080c8" containerID="8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7" exitCode=0 Mar 12 08:53:36 crc kubenswrapper[4809]: I0312 08:53:36.697900 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdl4p" 
event={"ID":"50a4501d-819a-4356-aa1d-b44db59080c8","Type":"ContainerDied","Data":"8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7"} Mar 12 08:53:37 crc kubenswrapper[4809]: I0312 08:53:37.710229 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdl4p" event={"ID":"50a4501d-819a-4356-aa1d-b44db59080c8","Type":"ContainerStarted","Data":"1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f"} Mar 12 08:53:37 crc kubenswrapper[4809]: I0312 08:53:37.713804 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnxp5" event={"ID":"9a0e09f1-cf57-4f3c-a69a-0be101f82e31","Type":"ContainerStarted","Data":"7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17"} Mar 12 08:53:37 crc kubenswrapper[4809]: I0312 08:53:37.760967 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cnxp5" podStartSLOduration=2.278658901 podStartE2EDuration="5.760944971s" podCreationTimestamp="2026-03-12 08:53:32 +0000 UTC" firstStartedPulling="2026-03-12 08:53:33.64759999 +0000 UTC m=+3287.229635723" lastFinishedPulling="2026-03-12 08:53:37.12988605 +0000 UTC m=+3290.711921793" observedRunningTime="2026-03-12 08:53:37.743833964 +0000 UTC m=+3291.325869697" watchObservedRunningTime="2026-03-12 08:53:37.760944971 +0000 UTC m=+3291.342980714" Mar 12 08:53:40 crc kubenswrapper[4809]: I0312 08:53:40.754802 4809 generic.go:334] "Generic (PLEG): container finished" podID="50a4501d-819a-4356-aa1d-b44db59080c8" containerID="1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f" exitCode=0 Mar 12 08:53:40 crc kubenswrapper[4809]: I0312 08:53:40.754863 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdl4p" 
event={"ID":"50a4501d-819a-4356-aa1d-b44db59080c8","Type":"ContainerDied","Data":"1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f"} Mar 12 08:53:41 crc kubenswrapper[4809]: I0312 08:53:41.767935 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdl4p" event={"ID":"50a4501d-819a-4356-aa1d-b44db59080c8","Type":"ContainerStarted","Data":"da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd"} Mar 12 08:53:41 crc kubenswrapper[4809]: I0312 08:53:41.795660 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zdl4p" podStartSLOduration=3.3417838 podStartE2EDuration="7.795640786s" podCreationTimestamp="2026-03-12 08:53:34 +0000 UTC" firstStartedPulling="2026-03-12 08:53:36.700215392 +0000 UTC m=+3290.282251165" lastFinishedPulling="2026-03-12 08:53:41.154072428 +0000 UTC m=+3294.736108151" observedRunningTime="2026-03-12 08:53:41.78738173 +0000 UTC m=+3295.369417463" watchObservedRunningTime="2026-03-12 08:53:41.795640786 +0000 UTC m=+3295.377676519" Mar 12 08:53:42 crc kubenswrapper[4809]: I0312 08:53:42.575546 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:42 crc kubenswrapper[4809]: I0312 08:53:42.575921 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:42 crc kubenswrapper[4809]: I0312 08:53:42.626998 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:42 crc kubenswrapper[4809]: I0312 08:53:42.826303 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:44 crc kubenswrapper[4809]: I0312 08:53:44.970191 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:44 crc kubenswrapper[4809]: I0312 08:53:44.970838 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:45 crc kubenswrapper[4809]: I0312 08:53:45.036591 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.204961 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cnxp5"] Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.205238 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cnxp5" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerName="registry-server" containerID="cri-o://7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17" gracePeriod=2 Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.715588 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.765252 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-catalog-content\") pod \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.765616 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2dcc\" (UniqueName: \"kubernetes.io/projected/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-kube-api-access-t2dcc\") pod \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.765680 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-utilities\") pod \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\" (UID: \"9a0e09f1-cf57-4f3c-a69a-0be101f82e31\") " Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.768208 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-utilities" (OuterVolumeSpecName: "utilities") pod "9a0e09f1-cf57-4f3c-a69a-0be101f82e31" (UID: "9a0e09f1-cf57-4f3c-a69a-0be101f82e31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.773285 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-kube-api-access-t2dcc" (OuterVolumeSpecName: "kube-api-access-t2dcc") pod "9a0e09f1-cf57-4f3c-a69a-0be101f82e31" (UID: "9a0e09f1-cf57-4f3c-a69a-0be101f82e31"). InnerVolumeSpecName "kube-api-access-t2dcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.826850 4809 generic.go:334] "Generic (PLEG): container finished" podID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerID="7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17" exitCode=0 Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.826906 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnxp5" event={"ID":"9a0e09f1-cf57-4f3c-a69a-0be101f82e31","Type":"ContainerDied","Data":"7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17"} Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.826946 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnxp5" event={"ID":"9a0e09f1-cf57-4f3c-a69a-0be101f82e31","Type":"ContainerDied","Data":"693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008"} Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.826967 4809 scope.go:117] "RemoveContainer" containerID="7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.826996 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnxp5" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.843614 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a0e09f1-cf57-4f3c-a69a-0be101f82e31" (UID: "9a0e09f1-cf57-4f3c-a69a-0be101f82e31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.852559 4809 scope.go:117] "RemoveContainer" containerID="be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.869534 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.869568 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2dcc\" (UniqueName: \"kubernetes.io/projected/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-kube-api-access-t2dcc\") on node \"crc\" DevicePath \"\"" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.869577 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e09f1-cf57-4f3c-a69a-0be101f82e31-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.874077 4809 scope.go:117] "RemoveContainer" containerID="45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.927572 4809 scope.go:117] "RemoveContainer" containerID="7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17" Mar 12 08:53:46 crc kubenswrapper[4809]: E0312 08:53:46.928239 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17\": container with ID starting with 7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17 not found: ID does not exist" containerID="7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.928331 4809 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17"} err="failed to get container status \"7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17\": rpc error: code = NotFound desc = could not find container \"7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17\": container with ID starting with 7f1d71278929c38fcd726ce4aeaa8f479c99090cffddb9297e39fdaaf4c57b17 not found: ID does not exist" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.928390 4809 scope.go:117] "RemoveContainer" containerID="be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8" Mar 12 08:53:46 crc kubenswrapper[4809]: E0312 08:53:46.929191 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8\": container with ID starting with be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8 not found: ID does not exist" containerID="be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.929275 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8"} err="failed to get container status \"be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8\": rpc error: code = NotFound desc = could not find container \"be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8\": container with ID starting with be4847e614d6becee768fb53eb2a1a9f032c8b333b7c3aa1346d223f228f92f8 not found: ID does not exist" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.929327 4809 scope.go:117] "RemoveContainer" containerID="45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4" Mar 12 08:53:46 crc kubenswrapper[4809]: E0312 08:53:46.929762 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4\": container with ID starting with 45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4 not found: ID does not exist" containerID="45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4" Mar 12 08:53:46 crc kubenswrapper[4809]: I0312 08:53:46.929808 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4"} err="failed to get container status \"45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4\": rpc error: code = NotFound desc = could not find container \"45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4\": container with ID starting with 45c76821f1ca4e97ea89c923ce583d7e1d91b76ed9a59adb809132b4432c11d4 not found: ID does not exist" Mar 12 08:53:47 crc kubenswrapper[4809]: I0312 08:53:47.192642 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cnxp5"] Mar 12 08:53:47 crc kubenswrapper[4809]: I0312 08:53:47.203381 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cnxp5"] Mar 12 08:53:48 crc kubenswrapper[4809]: E0312 08:53:48.301907 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache]" Mar 12 08:53:48 crc kubenswrapper[4809]: E0312 08:53:48.302795 4809 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:53:49 crc kubenswrapper[4809]: I0312 08:53:49.129399 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" path="/var/lib/kubelet/pods/9a0e09f1-cf57-4f3c-a69a-0be101f82e31/volumes" Mar 12 08:53:49 crc kubenswrapper[4809]: E0312 08:53:49.453247 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:53:50 crc kubenswrapper[4809]: E0312 08:53:50.477791 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:53:55 crc kubenswrapper[4809]: I0312 08:53:55.058152 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:55 crc kubenswrapper[4809]: I0312 08:53:55.128431 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdl4p"] Mar 12 08:53:55 crc kubenswrapper[4809]: I0312 08:53:55.965108 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zdl4p" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" containerName="registry-server" containerID="cri-o://da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd" gracePeriod=2 Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.497971 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.586340 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-catalog-content\") pod \"50a4501d-819a-4356-aa1d-b44db59080c8\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.586405 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvlpl\" (UniqueName: \"kubernetes.io/projected/50a4501d-819a-4356-aa1d-b44db59080c8-kube-api-access-fvlpl\") pod \"50a4501d-819a-4356-aa1d-b44db59080c8\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.586855 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-utilities\") pod \"50a4501d-819a-4356-aa1d-b44db59080c8\" (UID: \"50a4501d-819a-4356-aa1d-b44db59080c8\") " Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.587832 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-utilities" (OuterVolumeSpecName: "utilities") pod "50a4501d-819a-4356-aa1d-b44db59080c8" (UID: "50a4501d-819a-4356-aa1d-b44db59080c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.592638 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a4501d-819a-4356-aa1d-b44db59080c8-kube-api-access-fvlpl" (OuterVolumeSpecName: "kube-api-access-fvlpl") pod "50a4501d-819a-4356-aa1d-b44db59080c8" (UID: "50a4501d-819a-4356-aa1d-b44db59080c8"). InnerVolumeSpecName "kube-api-access-fvlpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.648976 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50a4501d-819a-4356-aa1d-b44db59080c8" (UID: "50a4501d-819a-4356-aa1d-b44db59080c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.690294 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.690352 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvlpl\" (UniqueName: \"kubernetes.io/projected/50a4501d-819a-4356-aa1d-b44db59080c8-kube-api-access-fvlpl\") on node \"crc\" DevicePath \"\"" Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.690371 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a4501d-819a-4356-aa1d-b44db59080c8-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.977912 4809 generic.go:334] "Generic (PLEG): container finished" podID="50a4501d-819a-4356-aa1d-b44db59080c8" containerID="da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd" exitCode=0 Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.977987 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdl4p" event={"ID":"50a4501d-819a-4356-aa1d-b44db59080c8","Type":"ContainerDied","Data":"da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd"} Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.978022 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdl4p" event={"ID":"50a4501d-819a-4356-aa1d-b44db59080c8","Type":"ContainerDied","Data":"0113b2cc6632ef2dc7523d0ffd1b2132c8b2af3a4317e5b44266cf36b39a34d3"} Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 08:53:56.978041 4809 scope.go:117] "RemoveContainer" containerID="da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd" Mar 12 08:53:56 crc kubenswrapper[4809]: I0312 
08:53:56.978187 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdl4p" Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.008303 4809 scope.go:117] "RemoveContainer" containerID="1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f" Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.021783 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdl4p"] Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.032033 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zdl4p"] Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.051077 4809 scope.go:117] "RemoveContainer" containerID="8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7" Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.089691 4809 scope.go:117] "RemoveContainer" containerID="da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd" Mar 12 08:53:57 crc kubenswrapper[4809]: E0312 08:53:57.090491 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd\": container with ID starting with da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd not found: ID does not exist" containerID="da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd" Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.090525 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd"} err="failed to get container status \"da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd\": rpc error: code = NotFound desc = could not find container \"da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd\": container with ID starting with 
da6d4359551d8b45db1220110fbfd2fc6edb1a8a794d3c948d78f7ee72746dbd not found: ID does not exist" Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.090548 4809 scope.go:117] "RemoveContainer" containerID="1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f" Mar 12 08:53:57 crc kubenswrapper[4809]: E0312 08:53:57.090771 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f\": container with ID starting with 1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f not found: ID does not exist" containerID="1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f" Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.090797 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f"} err="failed to get container status \"1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f\": rpc error: code = NotFound desc = could not find container \"1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f\": container with ID starting with 1641d3419dbdc5ca9bd6903c35b60c2fe4db7cfdd3da25aac94b2b791f31d92f not found: ID does not exist" Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.090812 4809 scope.go:117] "RemoveContainer" containerID="8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7" Mar 12 08:53:57 crc kubenswrapper[4809]: E0312 08:53:57.091410 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7\": container with ID starting with 8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7 not found: ID does not exist" containerID="8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7" Mar 12 08:53:57 crc 
kubenswrapper[4809]: I0312 08:53:57.091436 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7"} err="failed to get container status \"8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7\": rpc error: code = NotFound desc = could not find container \"8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7\": container with ID starting with 8edf11cb27d53da6b9ff1eb08c313d4db5ce2db7c6c12f1098e779beba4bbec7 not found: ID does not exist" Mar 12 08:53:57 crc kubenswrapper[4809]: I0312 08:53:57.121821 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" path="/var/lib/kubelet/pods/50a4501d-819a-4356-aa1d-b44db59080c8/volumes" Mar 12 08:53:59 crc kubenswrapper[4809]: E0312 08:53:59.769309 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.208942 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555094-zplxn"] Mar 12 08:54:00 crc kubenswrapper[4809]: E0312 08:54:00.209656 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerName="extract-content" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.210048 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerName="extract-content" Mar 12 08:54:00 crc 
kubenswrapper[4809]: E0312 08:54:00.210065 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" containerName="registry-server" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.210075 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" containerName="registry-server" Mar 12 08:54:00 crc kubenswrapper[4809]: E0312 08:54:00.210108 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerName="extract-utilities" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.210135 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerName="extract-utilities" Mar 12 08:54:00 crc kubenswrapper[4809]: E0312 08:54:00.210152 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" containerName="extract-utilities" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.210160 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" containerName="extract-utilities" Mar 12 08:54:00 crc kubenswrapper[4809]: E0312 08:54:00.210180 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" containerName="extract-content" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.210189 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" containerName="extract-content" Mar 12 08:54:00 crc kubenswrapper[4809]: E0312 08:54:00.210218 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerName="registry-server" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.210227 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerName="registry-server" Mar 12 08:54:00 crc 
kubenswrapper[4809]: I0312 08:54:00.210552 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4501d-819a-4356-aa1d-b44db59080c8" containerName="registry-server" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.210574 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0e09f1-cf57-4f3c-a69a-0be101f82e31" containerName="registry-server" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.211795 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555094-zplxn" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.214836 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.216089 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.216357 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.223791 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555094-zplxn"] Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.294455 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726mg\" (UniqueName: \"kubernetes.io/projected/cf36fd42-9301-443c-844c-50641f761cc1-kube-api-access-726mg\") pod \"auto-csr-approver-29555094-zplxn\" (UID: \"cf36fd42-9301-443c-844c-50641f761cc1\") " pod="openshift-infra/auto-csr-approver-29555094-zplxn" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.398088 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726mg\" (UniqueName: \"kubernetes.io/projected/cf36fd42-9301-443c-844c-50641f761cc1-kube-api-access-726mg\") pod 
\"auto-csr-approver-29555094-zplxn\" (UID: \"cf36fd42-9301-443c-844c-50641f761cc1\") " pod="openshift-infra/auto-csr-approver-29555094-zplxn" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.422154 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726mg\" (UniqueName: \"kubernetes.io/projected/cf36fd42-9301-443c-844c-50641f761cc1-kube-api-access-726mg\") pod \"auto-csr-approver-29555094-zplxn\" (UID: \"cf36fd42-9301-443c-844c-50641f761cc1\") " pod="openshift-infra/auto-csr-approver-29555094-zplxn" Mar 12 08:54:00 crc kubenswrapper[4809]: I0312 08:54:00.539882 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555094-zplxn" Mar 12 08:54:01 crc kubenswrapper[4809]: I0312 08:54:01.048625 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555094-zplxn"] Mar 12 08:54:02 crc kubenswrapper[4809]: I0312 08:54:02.051557 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555094-zplxn" event={"ID":"cf36fd42-9301-443c-844c-50641f761cc1","Type":"ContainerStarted","Data":"04fb795550f097c578e39c57a4f0614687ae8cf5685069fc572c854881d3c721"} Mar 12 08:54:03 crc kubenswrapper[4809]: I0312 08:54:03.063860 4809 generic.go:334] "Generic (PLEG): container finished" podID="cf36fd42-9301-443c-844c-50641f761cc1" containerID="cdc90b692dfa53d96d4391805a888ba35d471433b8dbe0098b71c3b8803d3852" exitCode=0 Mar 12 08:54:03 crc kubenswrapper[4809]: I0312 08:54:03.064214 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555094-zplxn" event={"ID":"cf36fd42-9301-443c-844c-50641f761cc1","Type":"ContainerDied","Data":"cdc90b692dfa53d96d4391805a888ba35d471433b8dbe0098b71c3b8803d3852"} Mar 12 08:54:04 crc kubenswrapper[4809]: I0312 08:54:04.479504 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555094-zplxn" Mar 12 08:54:04 crc kubenswrapper[4809]: I0312 08:54:04.643761 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-726mg\" (UniqueName: \"kubernetes.io/projected/cf36fd42-9301-443c-844c-50641f761cc1-kube-api-access-726mg\") pod \"cf36fd42-9301-443c-844c-50641f761cc1\" (UID: \"cf36fd42-9301-443c-844c-50641f761cc1\") " Mar 12 08:54:04 crc kubenswrapper[4809]: I0312 08:54:04.649438 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf36fd42-9301-443c-844c-50641f761cc1-kube-api-access-726mg" (OuterVolumeSpecName: "kube-api-access-726mg") pod "cf36fd42-9301-443c-844c-50641f761cc1" (UID: "cf36fd42-9301-443c-844c-50641f761cc1"). InnerVolumeSpecName "kube-api-access-726mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:54:04 crc kubenswrapper[4809]: I0312 08:54:04.747958 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-726mg\" (UniqueName: \"kubernetes.io/projected/cf36fd42-9301-443c-844c-50641f761cc1-kube-api-access-726mg\") on node \"crc\" DevicePath \"\"" Mar 12 08:54:05 crc kubenswrapper[4809]: I0312 08:54:05.085014 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555094-zplxn" event={"ID":"cf36fd42-9301-443c-844c-50641f761cc1","Type":"ContainerDied","Data":"04fb795550f097c578e39c57a4f0614687ae8cf5685069fc572c854881d3c721"} Mar 12 08:54:05 crc kubenswrapper[4809]: I0312 08:54:05.085055 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04fb795550f097c578e39c57a4f0614687ae8cf5685069fc572c854881d3c721" Mar 12 08:54:05 crc kubenswrapper[4809]: I0312 08:54:05.085353 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555094-zplxn" Mar 12 08:54:05 crc kubenswrapper[4809]: E0312 08:54:05.472941 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache]" Mar 12 08:54:05 crc kubenswrapper[4809]: I0312 08:54:05.561980 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555088-5g6fz"] Mar 12 08:54:05 crc kubenswrapper[4809]: I0312 08:54:05.576237 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555088-5g6fz"] Mar 12 08:54:07 crc kubenswrapper[4809]: I0312 08:54:07.137142 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4ca1fa-82c4-44c8-897f-f8e194959127" path="/var/lib/kubelet/pods/cf4ca1fa-82c4-44c8-897f-f8e194959127/volumes" Mar 12 08:54:10 crc kubenswrapper[4809]: E0312 08:54:10.082684 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:54:20 crc kubenswrapper[4809]: E0312 08:54:20.477916 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache]" Mar 12 08:54:20 crc kubenswrapper[4809]: E0312 08:54:20.478753 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache]" Mar 12 08:54:26 crc kubenswrapper[4809]: I0312 08:54:26.292673 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-677488855f-bz28w" podUID="5c65e075-ccaa-4054-9903-ebcd26368c00" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 12 08:54:30 crc kubenswrapper[4809]: E0312 08:54:30.813430 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache]" Mar 12 08:54:33 crc kubenswrapper[4809]: I0312 08:54:33.130741 4809 scope.go:117] "RemoveContainer" 
containerID="51699eecda0deaaf166b6409a0da95a1e8617d2c877f1784006d242c6c9d8e97" Mar 12 08:54:35 crc kubenswrapper[4809]: E0312 08:54:35.475418 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache]" Mar 12 08:54:41 crc kubenswrapper[4809]: E0312 08:54:41.102397 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e09f1_cf57_4f3c_a69a_0be101f82e31.slice/crio-693d5ac0703faa99766018564d0e3b04f225bd22c7f8c26b8e9026867929d008\": RecentStats: unable to find data in memory cache]" Mar 12 08:54:45 crc kubenswrapper[4809]: I0312 08:54:45.049260 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:54:45 crc kubenswrapper[4809]: I0312 08:54:45.050381 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:55:15 crc 
kubenswrapper[4809]: I0312 08:55:15.048765 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:55:15 crc kubenswrapper[4809]: I0312 08:55:15.049859 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.048938 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.049541 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.049593 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.050538 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5792759995ae3718ae365ac07d2eb7940dfaa04425d30dececf1d2e6ef7ac054"} 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.050591 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://5792759995ae3718ae365ac07d2eb7940dfaa04425d30dececf1d2e6ef7ac054" gracePeriod=600 Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.414464 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="5792759995ae3718ae365ac07d2eb7940dfaa04425d30dececf1d2e6ef7ac054" exitCode=0 Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.414529 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"5792759995ae3718ae365ac07d2eb7940dfaa04425d30dececf1d2e6ef7ac054"} Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.414864 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"} Mar 12 08:55:45 crc kubenswrapper[4809]: I0312 08:55:45.414883 4809 scope.go:117] "RemoveContainer" containerID="09fd2a67830bd13c39e112c63c28313e1f2e66e0b513dca499389267040e529c" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.158789 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555096-p8jrr"] Mar 12 08:56:00 crc kubenswrapper[4809]: E0312 08:56:00.160228 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cf36fd42-9301-443c-844c-50641f761cc1" containerName="oc" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.160250 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf36fd42-9301-443c-844c-50641f761cc1" containerName="oc" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.160561 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf36fd42-9301-443c-844c-50641f761cc1" containerName="oc" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.161734 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555096-p8jrr" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.200826 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555096-p8jrr"] Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.209095 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.209197 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.210423 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.292464 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5krr\" (UniqueName: \"kubernetes.io/projected/7c6928d8-a14c-4043-853e-4b3699cf88a9-kube-api-access-s5krr\") pod \"auto-csr-approver-29555096-p8jrr\" (UID: \"7c6928d8-a14c-4043-853e-4b3699cf88a9\") " pod="openshift-infra/auto-csr-approver-29555096-p8jrr" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.397287 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5krr\" (UniqueName: 
\"kubernetes.io/projected/7c6928d8-a14c-4043-853e-4b3699cf88a9-kube-api-access-s5krr\") pod \"auto-csr-approver-29555096-p8jrr\" (UID: \"7c6928d8-a14c-4043-853e-4b3699cf88a9\") " pod="openshift-infra/auto-csr-approver-29555096-p8jrr" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.418526 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5krr\" (UniqueName: \"kubernetes.io/projected/7c6928d8-a14c-4043-853e-4b3699cf88a9-kube-api-access-s5krr\") pod \"auto-csr-approver-29555096-p8jrr\" (UID: \"7c6928d8-a14c-4043-853e-4b3699cf88a9\") " pod="openshift-infra/auto-csr-approver-29555096-p8jrr" Mar 12 08:56:00 crc kubenswrapper[4809]: I0312 08:56:00.536647 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555096-p8jrr" Mar 12 08:56:01 crc kubenswrapper[4809]: I0312 08:56:01.080019 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555096-p8jrr"] Mar 12 08:56:01 crc kubenswrapper[4809]: I0312 08:56:01.084411 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 08:56:01 crc kubenswrapper[4809]: I0312 08:56:01.655938 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555096-p8jrr" event={"ID":"7c6928d8-a14c-4043-853e-4b3699cf88a9","Type":"ContainerStarted","Data":"6593431f627af9b95288834536776f7246df78d0026197dea659ae24e31b5c47"} Mar 12 08:56:02 crc kubenswrapper[4809]: I0312 08:56:02.668715 4809 generic.go:334] "Generic (PLEG): container finished" podID="7c6928d8-a14c-4043-853e-4b3699cf88a9" containerID="265a25b2a415dcdfee6427c05ebb499dc950b596358ddb42073b8cb2b12ef17f" exitCode=0 Mar 12 08:56:02 crc kubenswrapper[4809]: I0312 08:56:02.668808 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555096-p8jrr" 
event={"ID":"7c6928d8-a14c-4043-853e-4b3699cf88a9","Type":"ContainerDied","Data":"265a25b2a415dcdfee6427c05ebb499dc950b596358ddb42073b8cb2b12ef17f"} Mar 12 08:56:04 crc kubenswrapper[4809]: I0312 08:56:04.148774 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555096-p8jrr" Mar 12 08:56:04 crc kubenswrapper[4809]: I0312 08:56:04.220683 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5krr\" (UniqueName: \"kubernetes.io/projected/7c6928d8-a14c-4043-853e-4b3699cf88a9-kube-api-access-s5krr\") pod \"7c6928d8-a14c-4043-853e-4b3699cf88a9\" (UID: \"7c6928d8-a14c-4043-853e-4b3699cf88a9\") " Mar 12 08:56:04 crc kubenswrapper[4809]: I0312 08:56:04.229864 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6928d8-a14c-4043-853e-4b3699cf88a9-kube-api-access-s5krr" (OuterVolumeSpecName: "kube-api-access-s5krr") pod "7c6928d8-a14c-4043-853e-4b3699cf88a9" (UID: "7c6928d8-a14c-4043-853e-4b3699cf88a9"). InnerVolumeSpecName "kube-api-access-s5krr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:56:04 crc kubenswrapper[4809]: I0312 08:56:04.323838 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5krr\" (UniqueName: \"kubernetes.io/projected/7c6928d8-a14c-4043-853e-4b3699cf88a9-kube-api-access-s5krr\") on node \"crc\" DevicePath \"\"" Mar 12 08:56:04 crc kubenswrapper[4809]: I0312 08:56:04.695006 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555096-p8jrr" event={"ID":"7c6928d8-a14c-4043-853e-4b3699cf88a9","Type":"ContainerDied","Data":"6593431f627af9b95288834536776f7246df78d0026197dea659ae24e31b5c47"} Mar 12 08:56:04 crc kubenswrapper[4809]: I0312 08:56:04.695056 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6593431f627af9b95288834536776f7246df78d0026197dea659ae24e31b5c47" Mar 12 08:56:04 crc kubenswrapper[4809]: I0312 08:56:04.695598 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555096-p8jrr" Mar 12 08:56:05 crc kubenswrapper[4809]: I0312 08:56:05.233861 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555090-rsl2p"] Mar 12 08:56:05 crc kubenswrapper[4809]: I0312 08:56:05.246930 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555090-rsl2p"] Mar 12 08:56:07 crc kubenswrapper[4809]: I0312 08:56:07.131620 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a73449e-b50c-4a14-8031-a7b1aacb85ed" path="/var/lib/kubelet/pods/9a73449e-b50c-4a14-8031-a7b1aacb85ed/volumes" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.406086 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dqth"] Mar 12 08:56:26 crc kubenswrapper[4809]: E0312 08:56:26.407588 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7c6928d8-a14c-4043-853e-4b3699cf88a9" containerName="oc" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.407608 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6928d8-a14c-4043-853e-4b3699cf88a9" containerName="oc" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.407830 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6928d8-a14c-4043-853e-4b3699cf88a9" containerName="oc" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.409429 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.517174 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-catalog-content\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.517347 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-utilities\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.518455 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9jw7\" (UniqueName: \"kubernetes.io/projected/8e5e2b83-f0cd-4af4-a5c6-97090623477e-kube-api-access-s9jw7\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.532736 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-4dqth"] Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.623011 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9jw7\" (UniqueName: \"kubernetes.io/projected/8e5e2b83-f0cd-4af4-a5c6-97090623477e-kube-api-access-s9jw7\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.623360 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-catalog-content\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.623471 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-utilities\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.624306 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-catalog-content\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.624347 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-utilities\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 
crc kubenswrapper[4809]: I0312 08:56:26.647534 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9jw7\" (UniqueName: \"kubernetes.io/projected/8e5e2b83-f0cd-4af4-a5c6-97090623477e-kube-api-access-s9jw7\") pod \"redhat-operators-4dqth\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:26 crc kubenswrapper[4809]: I0312 08:56:26.777746 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:27 crc kubenswrapper[4809]: I0312 08:56:27.303419 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dqth"] Mar 12 08:56:28 crc kubenswrapper[4809]: I0312 08:56:28.056575 4809 generic.go:334] "Generic (PLEG): container finished" podID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerID="b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28" exitCode=0 Mar 12 08:56:28 crc kubenswrapper[4809]: I0312 08:56:28.056754 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqth" event={"ID":"8e5e2b83-f0cd-4af4-a5c6-97090623477e","Type":"ContainerDied","Data":"b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28"} Mar 12 08:56:28 crc kubenswrapper[4809]: I0312 08:56:28.056899 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqth" event={"ID":"8e5e2b83-f0cd-4af4-a5c6-97090623477e","Type":"ContainerStarted","Data":"2f437ea33317f4f58a79e9d1a3ea774d89b6bd9d7ae729c07baca57f431c7816"} Mar 12 08:56:29 crc kubenswrapper[4809]: I0312 08:56:29.084405 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqth" event={"ID":"8e5e2b83-f0cd-4af4-a5c6-97090623477e","Type":"ContainerStarted","Data":"ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9"} Mar 12 08:56:33 crc kubenswrapper[4809]: 
I0312 08:56:33.307391 4809 scope.go:117] "RemoveContainer" containerID="a81b26b487a51ff61c8395660886773f0f94554cb486b6046e4da584b7699817" Mar 12 08:56:34 crc kubenswrapper[4809]: I0312 08:56:34.164306 4809 generic.go:334] "Generic (PLEG): container finished" podID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerID="ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9" exitCode=0 Mar 12 08:56:34 crc kubenswrapper[4809]: I0312 08:56:34.164437 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqth" event={"ID":"8e5e2b83-f0cd-4af4-a5c6-97090623477e","Type":"ContainerDied","Data":"ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9"} Mar 12 08:56:35 crc kubenswrapper[4809]: I0312 08:56:35.178759 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqth" event={"ID":"8e5e2b83-f0cd-4af4-a5c6-97090623477e","Type":"ContainerStarted","Data":"aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23"} Mar 12 08:56:35 crc kubenswrapper[4809]: I0312 08:56:35.205621 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dqth" podStartSLOduration=2.479577559 podStartE2EDuration="9.205597163s" podCreationTimestamp="2026-03-12 08:56:26 +0000 UTC" firstStartedPulling="2026-03-12 08:56:28.058857254 +0000 UTC m=+3461.640893007" lastFinishedPulling="2026-03-12 08:56:34.784876878 +0000 UTC m=+3468.366912611" observedRunningTime="2026-03-12 08:56:35.19666631 +0000 UTC m=+3468.778702043" watchObservedRunningTime="2026-03-12 08:56:35.205597163 +0000 UTC m=+3468.787632906" Mar 12 08:56:36 crc kubenswrapper[4809]: I0312 08:56:36.778641 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:36 crc kubenswrapper[4809]: I0312 08:56:36.779350 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:37 crc kubenswrapper[4809]: I0312 08:56:37.850743 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dqth" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="registry-server" probeResult="failure" output=< Mar 12 08:56:37 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:56:37 crc kubenswrapper[4809]: > Mar 12 08:56:47 crc kubenswrapper[4809]: I0312 08:56:47.831748 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dqth" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="registry-server" probeResult="failure" output=< Mar 12 08:56:47 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 08:56:47 crc kubenswrapper[4809]: > Mar 12 08:56:56 crc kubenswrapper[4809]: I0312 08:56:56.829262 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:56 crc kubenswrapper[4809]: I0312 08:56:56.884425 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:57 crc kubenswrapper[4809]: I0312 08:56:57.599615 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dqth"] Mar 12 08:56:58 crc kubenswrapper[4809]: I0312 08:56:58.426293 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dqth" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="registry-server" containerID="cri-o://aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23" gracePeriod=2 Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.079098 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.131631 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-utilities\") pod \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.131726 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9jw7\" (UniqueName: \"kubernetes.io/projected/8e5e2b83-f0cd-4af4-a5c6-97090623477e-kube-api-access-s9jw7\") pod \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.131821 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-catalog-content\") pod \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\" (UID: \"8e5e2b83-f0cd-4af4-a5c6-97090623477e\") " Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.133338 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-utilities" (OuterVolumeSpecName: "utilities") pod "8e5e2b83-f0cd-4af4-a5c6-97090623477e" (UID: "8e5e2b83-f0cd-4af4-a5c6-97090623477e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.138990 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5e2b83-f0cd-4af4-a5c6-97090623477e-kube-api-access-s9jw7" (OuterVolumeSpecName: "kube-api-access-s9jw7") pod "8e5e2b83-f0cd-4af4-a5c6-97090623477e" (UID: "8e5e2b83-f0cd-4af4-a5c6-97090623477e"). InnerVolumeSpecName "kube-api-access-s9jw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.234408 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.234444 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9jw7\" (UniqueName: \"kubernetes.io/projected/8e5e2b83-f0cd-4af4-a5c6-97090623477e-kube-api-access-s9jw7\") on node \"crc\" DevicePath \"\"" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.277955 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e5e2b83-f0cd-4af4-a5c6-97090623477e" (UID: "8e5e2b83-f0cd-4af4-a5c6-97090623477e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.337711 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e5e2b83-f0cd-4af4-a5c6-97090623477e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.439736 4809 generic.go:334] "Generic (PLEG): container finished" podID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerID="aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23" exitCode=0 Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.439785 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dqth" event={"ID":"8e5e2b83-f0cd-4af4-a5c6-97090623477e","Type":"ContainerDied","Data":"aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23"} Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.439814 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-4dqth" event={"ID":"8e5e2b83-f0cd-4af4-a5c6-97090623477e","Type":"ContainerDied","Data":"2f437ea33317f4f58a79e9d1a3ea774d89b6bd9d7ae729c07baca57f431c7816"} Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.439830 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dqth" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.439833 4809 scope.go:117] "RemoveContainer" containerID="aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.485273 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dqth"] Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.497231 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dqth"] Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.498912 4809 scope.go:117] "RemoveContainer" containerID="ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.537149 4809 scope.go:117] "RemoveContainer" containerID="b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.597718 4809 scope.go:117] "RemoveContainer" containerID="aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23" Mar 12 08:56:59 crc kubenswrapper[4809]: E0312 08:56:59.598201 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23\": container with ID starting with aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23 not found: ID does not exist" containerID="aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.598269 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23"} err="failed to get container status \"aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23\": rpc error: code = NotFound desc = could not find container \"aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23\": container with ID starting with aee33388d18234e753899aabf0e5e7d848f950969a5a2a3307a89a5c65b34a23 not found: ID does not exist" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.598305 4809 scope.go:117] "RemoveContainer" containerID="ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9" Mar 12 08:56:59 crc kubenswrapper[4809]: E0312 08:56:59.598637 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9\": container with ID starting with ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9 not found: ID does not exist" containerID="ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.598681 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9"} err="failed to get container status \"ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9\": rpc error: code = NotFound desc = could not find container \"ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9\": container with ID starting with ffb654f4c3253475c2909f2328d7682423579cb2af62c6542b09a781918784f9 not found: ID does not exist" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.598711 4809 scope.go:117] "RemoveContainer" containerID="b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28" Mar 12 08:56:59 crc kubenswrapper[4809]: E0312 
08:56:59.598995 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28\": container with ID starting with b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28 not found: ID does not exist" containerID="b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28" Mar 12 08:56:59 crc kubenswrapper[4809]: I0312 08:56:59.599035 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28"} err="failed to get container status \"b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28\": rpc error: code = NotFound desc = could not find container \"b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28\": container with ID starting with b57638f25994c26f1d3b4a4e49a9c674747b1c3d97a82465a2fb65208a9aae28 not found: ID does not exist" Mar 12 08:57:01 crc kubenswrapper[4809]: I0312 08:57:01.130059 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" path="/var/lib/kubelet/pods/8e5e2b83-f0cd-4af4-a5c6-97090623477e/volumes" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.043695 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2j5c"] Mar 12 08:57:38 crc kubenswrapper[4809]: E0312 08:57:38.046416 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="extract-content" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.046449 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="extract-content" Mar 12 08:57:38 crc kubenswrapper[4809]: E0312 08:57:38.046467 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" 
containerName="extract-utilities" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.046479 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="extract-utilities" Mar 12 08:57:38 crc kubenswrapper[4809]: E0312 08:57:38.046511 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="registry-server" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.046520 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="registry-server" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.051150 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e5e2b83-f0cd-4af4-a5c6-97090623477e" containerName="registry-server" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.056773 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.093755 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2j5c"] Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.111367 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvn8\" (UniqueName: \"kubernetes.io/projected/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-kube-api-access-fjvn8\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.111515 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-utilities\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " 
pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.111612 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-catalog-content\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.212902 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjvn8\" (UniqueName: \"kubernetes.io/projected/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-kube-api-access-fjvn8\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.213007 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-utilities\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.213087 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-catalog-content\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.213527 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-utilities\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " 
pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.213541 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-catalog-content\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.247366 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjvn8\" (UniqueName: \"kubernetes.io/projected/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-kube-api-access-fjvn8\") pod \"redhat-marketplace-c2j5c\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.396688 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:38 crc kubenswrapper[4809]: I0312 08:57:38.941415 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2j5c"] Mar 12 08:57:38 crc kubenswrapper[4809]: W0312 08:57:38.941648 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod516d90ab_a594_4e5f_a4f9_eb4e94c1f849.slice/crio-8e33ac8f6df2fb605017c6e69d357ea4b68da7492ff376b10d09adcfaf02ae53 WatchSource:0}: Error finding container 8e33ac8f6df2fb605017c6e69d357ea4b68da7492ff376b10d09adcfaf02ae53: Status 404 returned error can't find the container with id 8e33ac8f6df2fb605017c6e69d357ea4b68da7492ff376b10d09adcfaf02ae53 Mar 12 08:57:39 crc kubenswrapper[4809]: I0312 08:57:39.903389 4809 generic.go:334] "Generic (PLEG): container finished" podID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerID="1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e" exitCode=0 
Mar 12 08:57:39 crc kubenswrapper[4809]: I0312 08:57:39.903438 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2j5c" event={"ID":"516d90ab-a594-4e5f-a4f9-eb4e94c1f849","Type":"ContainerDied","Data":"1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e"} Mar 12 08:57:39 crc kubenswrapper[4809]: I0312 08:57:39.903789 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2j5c" event={"ID":"516d90ab-a594-4e5f-a4f9-eb4e94c1f849","Type":"ContainerStarted","Data":"8e33ac8f6df2fb605017c6e69d357ea4b68da7492ff376b10d09adcfaf02ae53"} Mar 12 08:57:41 crc kubenswrapper[4809]: I0312 08:57:41.964525 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2j5c" event={"ID":"516d90ab-a594-4e5f-a4f9-eb4e94c1f849","Type":"ContainerStarted","Data":"7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa"} Mar 12 08:57:42 crc kubenswrapper[4809]: I0312 08:57:42.981776 4809 generic.go:334] "Generic (PLEG): container finished" podID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerID="7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa" exitCode=0 Mar 12 08:57:42 crc kubenswrapper[4809]: I0312 08:57:42.981894 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2j5c" event={"ID":"516d90ab-a594-4e5f-a4f9-eb4e94c1f849","Type":"ContainerDied","Data":"7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa"} Mar 12 08:57:44 crc kubenswrapper[4809]: I0312 08:57:44.000901 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2j5c" event={"ID":"516d90ab-a594-4e5f-a4f9-eb4e94c1f849","Type":"ContainerStarted","Data":"99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f"} Mar 12 08:57:44 crc kubenswrapper[4809]: I0312 08:57:44.027064 4809 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-c2j5c" podStartSLOduration=2.404224481 podStartE2EDuration="6.027043734s" podCreationTimestamp="2026-03-12 08:57:38 +0000 UTC" firstStartedPulling="2026-03-12 08:57:39.909384015 +0000 UTC m=+3533.491419738" lastFinishedPulling="2026-03-12 08:57:43.532203258 +0000 UTC m=+3537.114238991" observedRunningTime="2026-03-12 08:57:44.018240044 +0000 UTC m=+3537.600275807" watchObservedRunningTime="2026-03-12 08:57:44.027043734 +0000 UTC m=+3537.609079467" Mar 12 08:57:45 crc kubenswrapper[4809]: I0312 08:57:45.049058 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:57:45 crc kubenswrapper[4809]: I0312 08:57:45.049359 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:57:48 crc kubenswrapper[4809]: I0312 08:57:48.397042 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:48 crc kubenswrapper[4809]: I0312 08:57:48.397748 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:48 crc kubenswrapper[4809]: I0312 08:57:48.457370 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:49 crc kubenswrapper[4809]: I0312 08:57:49.142245 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:49 crc kubenswrapper[4809]: I0312 08:57:49.235986 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2j5c"] Mar 12 08:57:51 crc kubenswrapper[4809]: I0312 08:57:51.101367 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2j5c" podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerName="registry-server" containerID="cri-o://99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f" gracePeriod=2 Mar 12 08:57:51 crc kubenswrapper[4809]: I0312 08:57:51.772212 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:51 crc kubenswrapper[4809]: I0312 08:57:51.898677 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-utilities\") pod \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " Mar 12 08:57:51 crc kubenswrapper[4809]: I0312 08:57:51.898861 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjvn8\" (UniqueName: \"kubernetes.io/projected/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-kube-api-access-fjvn8\") pod \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " Mar 12 08:57:51 crc kubenswrapper[4809]: I0312 08:57:51.899047 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-catalog-content\") pod \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\" (UID: \"516d90ab-a594-4e5f-a4f9-eb4e94c1f849\") " Mar 12 08:57:51 crc kubenswrapper[4809]: I0312 08:57:51.900008 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-utilities" (OuterVolumeSpecName: "utilities") pod "516d90ab-a594-4e5f-a4f9-eb4e94c1f849" (UID: "516d90ab-a594-4e5f-a4f9-eb4e94c1f849"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:57:51 crc kubenswrapper[4809]: I0312 08:57:51.909520 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-kube-api-access-fjvn8" (OuterVolumeSpecName: "kube-api-access-fjvn8") pod "516d90ab-a594-4e5f-a4f9-eb4e94c1f849" (UID: "516d90ab-a594-4e5f-a4f9-eb4e94c1f849"). InnerVolumeSpecName "kube-api-access-fjvn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:57:51 crc kubenswrapper[4809]: I0312 08:57:51.930065 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "516d90ab-a594-4e5f-a4f9-eb4e94c1f849" (UID: "516d90ab-a594-4e5f-a4f9-eb4e94c1f849"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.002232 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.002276 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjvn8\" (UniqueName: \"kubernetes.io/projected/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-kube-api-access-fjvn8\") on node \"crc\" DevicePath \"\"" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.002290 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516d90ab-a594-4e5f-a4f9-eb4e94c1f849-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.120998 4809 generic.go:334] "Generic (PLEG): container finished" podID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerID="99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f" exitCode=0 Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.121049 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2j5c" event={"ID":"516d90ab-a594-4e5f-a4f9-eb4e94c1f849","Type":"ContainerDied","Data":"99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f"} Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.121073 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2j5c" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.121099 4809 scope.go:117] "RemoveContainer" containerID="99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.121082 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2j5c" event={"ID":"516d90ab-a594-4e5f-a4f9-eb4e94c1f849","Type":"ContainerDied","Data":"8e33ac8f6df2fb605017c6e69d357ea4b68da7492ff376b10d09adcfaf02ae53"} Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.153488 4809 scope.go:117] "RemoveContainer" containerID="7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.182705 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2j5c"] Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.198338 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2j5c"] Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.210847 4809 scope.go:117] "RemoveContainer" containerID="1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.278510 4809 scope.go:117] "RemoveContainer" containerID="99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f" Mar 12 08:57:52 crc kubenswrapper[4809]: E0312 08:57:52.279126 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f\": container with ID starting with 99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f not found: ID does not exist" containerID="99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.279191 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f"} err="failed to get container status \"99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f\": rpc error: code = NotFound desc = could not find container \"99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f\": container with ID starting with 99a73541a252ef0e41a41549f4bf3d1af879a555e2f340cfa73e89bb209ba09f not found: ID does not exist" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.279232 4809 scope.go:117] "RemoveContainer" containerID="7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa" Mar 12 08:57:52 crc kubenswrapper[4809]: E0312 08:57:52.279783 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa\": container with ID starting with 7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa not found: ID does not exist" containerID="7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.279906 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa"} err="failed to get container status \"7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa\": rpc error: code = NotFound desc = could not find container \"7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa\": container with ID starting with 7edb92838984e8e9ccee8661d9004ccfc03b9dad45962045940af107b24a53aa not found: ID does not exist" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.279958 4809 scope.go:117] "RemoveContainer" containerID="1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e" Mar 12 08:57:52 crc kubenswrapper[4809]: E0312 
08:57:52.280591 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e\": container with ID starting with 1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e not found: ID does not exist" containerID="1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e" Mar 12 08:57:52 crc kubenswrapper[4809]: I0312 08:57:52.280624 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e"} err="failed to get container status \"1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e\": rpc error: code = NotFound desc = could not find container \"1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e\": container with ID starting with 1cf58f88a7b999bbc7fe8a42effc8ff6ce5a42eb6601722c5fb1132c21f88a0e not found: ID does not exist" Mar 12 08:57:53 crc kubenswrapper[4809]: I0312 08:57:53.124517 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" path="/var/lib/kubelet/pods/516d90ab-a594-4e5f-a4f9-eb4e94c1f849/volumes" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.168429 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555098-dp8jh"] Mar 12 08:58:00 crc kubenswrapper[4809]: E0312 08:58:00.169492 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerName="extract-utilities" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.169505 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerName="extract-utilities" Mar 12 08:58:00 crc kubenswrapper[4809]: E0312 08:58:00.169516 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerName="registry-server" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.169527 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerName="registry-server" Mar 12 08:58:00 crc kubenswrapper[4809]: E0312 08:58:00.169554 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerName="extract-content" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.169561 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerName="extract-content" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.169834 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="516d90ab-a594-4e5f-a4f9-eb4e94c1f849" containerName="registry-server" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.170694 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555098-dp8jh" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.173443 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.173701 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.173818 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.182018 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555098-dp8jh"] Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.326220 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2897n\" (UniqueName: 
\"kubernetes.io/projected/f832f91b-67de-49a6-ab5a-d157be573ce4-kube-api-access-2897n\") pod \"auto-csr-approver-29555098-dp8jh\" (UID: \"f832f91b-67de-49a6-ab5a-d157be573ce4\") " pod="openshift-infra/auto-csr-approver-29555098-dp8jh" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.428929 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2897n\" (UniqueName: \"kubernetes.io/projected/f832f91b-67de-49a6-ab5a-d157be573ce4-kube-api-access-2897n\") pod \"auto-csr-approver-29555098-dp8jh\" (UID: \"f832f91b-67de-49a6-ab5a-d157be573ce4\") " pod="openshift-infra/auto-csr-approver-29555098-dp8jh" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.454513 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2897n\" (UniqueName: \"kubernetes.io/projected/f832f91b-67de-49a6-ab5a-d157be573ce4-kube-api-access-2897n\") pod \"auto-csr-approver-29555098-dp8jh\" (UID: \"f832f91b-67de-49a6-ab5a-d157be573ce4\") " pod="openshift-infra/auto-csr-approver-29555098-dp8jh" Mar 12 08:58:00 crc kubenswrapper[4809]: I0312 08:58:00.490382 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555098-dp8jh" Mar 12 08:58:01 crc kubenswrapper[4809]: I0312 08:58:01.055311 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555098-dp8jh"] Mar 12 08:58:01 crc kubenswrapper[4809]: I0312 08:58:01.240325 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555098-dp8jh" event={"ID":"f832f91b-67de-49a6-ab5a-d157be573ce4","Type":"ContainerStarted","Data":"44c2f0eace39761f3ecb34becaafd881e011ceb3a2e7dc8637fba717a18add96"} Mar 12 08:58:03 crc kubenswrapper[4809]: I0312 08:58:03.263724 4809 generic.go:334] "Generic (PLEG): container finished" podID="f832f91b-67de-49a6-ab5a-d157be573ce4" containerID="291dfcb6a1444dea535e270fc84049c359095cc40bea7cdfa4eb2807850a8fd5" exitCode=0 Mar 12 08:58:03 crc kubenswrapper[4809]: I0312 08:58:03.263791 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555098-dp8jh" event={"ID":"f832f91b-67de-49a6-ab5a-d157be573ce4","Type":"ContainerDied","Data":"291dfcb6a1444dea535e270fc84049c359095cc40bea7cdfa4eb2807850a8fd5"} Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:04.684788 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555098-dp8jh" Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:04.769105 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2897n\" (UniqueName: \"kubernetes.io/projected/f832f91b-67de-49a6-ab5a-d157be573ce4-kube-api-access-2897n\") pod \"f832f91b-67de-49a6-ab5a-d157be573ce4\" (UID: \"f832f91b-67de-49a6-ab5a-d157be573ce4\") " Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:04.778848 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f832f91b-67de-49a6-ab5a-d157be573ce4-kube-api-access-2897n" (OuterVolumeSpecName: "kube-api-access-2897n") pod "f832f91b-67de-49a6-ab5a-d157be573ce4" (UID: "f832f91b-67de-49a6-ab5a-d157be573ce4"). InnerVolumeSpecName "kube-api-access-2897n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:04.874872 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2897n\" (UniqueName: \"kubernetes.io/projected/f832f91b-67de-49a6-ab5a-d157be573ce4-kube-api-access-2897n\") on node \"crc\" DevicePath \"\"" Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:05.296507 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555098-dp8jh" event={"ID":"f832f91b-67de-49a6-ab5a-d157be573ce4","Type":"ContainerDied","Data":"44c2f0eace39761f3ecb34becaafd881e011ceb3a2e7dc8637fba717a18add96"} Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:05.296561 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c2f0eace39761f3ecb34becaafd881e011ceb3a2e7dc8637fba717a18add96" Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:05.296594 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555098-dp8jh" Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:05.784578 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555092-lncvb"] Mar 12 08:58:05 crc kubenswrapper[4809]: I0312 08:58:05.796225 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555092-lncvb"] Mar 12 08:58:07 crc kubenswrapper[4809]: I0312 08:58:07.134276 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372ceaaf-934a-4ebd-92ab-99b7cc28b068" path="/var/lib/kubelet/pods/372ceaaf-934a-4ebd-92ab-99b7cc28b068/volumes" Mar 12 08:58:15 crc kubenswrapper[4809]: I0312 08:58:15.049033 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:58:15 crc kubenswrapper[4809]: I0312 08:58:15.050003 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:58:33 crc kubenswrapper[4809]: I0312 08:58:33.442371 4809 scope.go:117] "RemoveContainer" containerID="718b44fa4dfdedfbf02259f1e596810ab177726b4af7be4e3267468ff4d72cb4" Mar 12 08:58:45 crc kubenswrapper[4809]: I0312 08:58:45.048293 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 08:58:45 crc kubenswrapper[4809]: 
I0312 08:58:45.049916 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 08:58:45 crc kubenswrapper[4809]: I0312 08:58:45.050045 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 08:58:45 crc kubenswrapper[4809]: I0312 08:58:45.051219 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 08:58:45 crc kubenswrapper[4809]: I0312 08:58:45.051406 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" gracePeriod=600 Mar 12 08:58:45 crc kubenswrapper[4809]: E0312 08:58:45.177886 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:58:46 crc kubenswrapper[4809]: I0312 08:58:46.036233 4809 generic.go:334] "Generic (PLEG): container finished" 
podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" exitCode=0 Mar 12 08:58:46 crc kubenswrapper[4809]: I0312 08:58:46.036289 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"} Mar 12 08:58:46 crc kubenswrapper[4809]: I0312 08:58:46.036911 4809 scope.go:117] "RemoveContainer" containerID="5792759995ae3718ae365ac07d2eb7940dfaa04425d30dececf1d2e6ef7ac054" Mar 12 08:58:46 crc kubenswrapper[4809]: I0312 08:58:46.038748 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" Mar 12 08:58:46 crc kubenswrapper[4809]: E0312 08:58:46.039708 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:58:59 crc kubenswrapper[4809]: I0312 08:58:59.107773 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" Mar 12 08:58:59 crc kubenswrapper[4809]: E0312 08:58:59.108872 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 
08:59:11 crc kubenswrapper[4809]: I0312 08:59:11.106897 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" Mar 12 08:59:11 crc kubenswrapper[4809]: E0312 08:59:11.107765 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:59:22 crc kubenswrapper[4809]: I0312 08:59:22.107436 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" Mar 12 08:59:22 crc kubenswrapper[4809]: E0312 08:59:22.108627 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:59:37 crc kubenswrapper[4809]: I0312 08:59:37.142623 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" Mar 12 08:59:37 crc kubenswrapper[4809]: E0312 08:59:37.144513 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" 
podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 08:59:50 crc kubenswrapper[4809]: I0312 08:59:50.107827 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" Mar 12 08:59:50 crc kubenswrapper[4809]: E0312 08:59:50.108969 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.169963 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph"] Mar 12 09:00:00 crc kubenswrapper[4809]: E0312 09:00:00.171787 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f832f91b-67de-49a6-ab5a-d157be573ce4" containerName="oc" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.171806 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f832f91b-67de-49a6-ab5a-d157be573ce4" containerName="oc" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.172238 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f832f91b-67de-49a6-ab5a-d157be573ce4" containerName="oc" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.173584 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.175816 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.176852 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.188426 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555100-2b7rf"] Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.190611 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555100-2b7rf" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.194672 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.196903 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.197265 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.231138 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph"] Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.252601 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555100-2b7rf"] Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.343179 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch54x\" (UniqueName: 
\"kubernetes.io/projected/ec3349ba-5be4-417e-a870-0b98408d9b25-kube-api-access-ch54x\") pod \"auto-csr-approver-29555100-2b7rf\" (UID: \"ec3349ba-5be4-417e-a870-0b98408d9b25\") " pod="openshift-infra/auto-csr-approver-29555100-2b7rf" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.343283 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-config-volume\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.343316 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-secret-volume\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.343361 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777zh\" (UniqueName: \"kubernetes.io/projected/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-kube-api-access-777zh\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.445928 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-777zh\" (UniqueName: \"kubernetes.io/projected/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-kube-api-access-777zh\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" 
Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.446116 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch54x\" (UniqueName: \"kubernetes.io/projected/ec3349ba-5be4-417e-a870-0b98408d9b25-kube-api-access-ch54x\") pod \"auto-csr-approver-29555100-2b7rf\" (UID: \"ec3349ba-5be4-417e-a870-0b98408d9b25\") " pod="openshift-infra/auto-csr-approver-29555100-2b7rf" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.446199 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-config-volume\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.446225 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-secret-volume\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.447148 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-config-volume\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.451472 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-secret-volume\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.464893 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-777zh\" (UniqueName: \"kubernetes.io/projected/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-kube-api-access-777zh\") pod \"collect-profiles-29555100-xtvph\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.465150 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch54x\" (UniqueName: \"kubernetes.io/projected/ec3349ba-5be4-417e-a870-0b98408d9b25-kube-api-access-ch54x\") pod \"auto-csr-approver-29555100-2b7rf\" (UID: \"ec3349ba-5be4-417e-a870-0b98408d9b25\") " pod="openshift-infra/auto-csr-approver-29555100-2b7rf" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.501734 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:00 crc kubenswrapper[4809]: I0312 09:00:00.509786 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555100-2b7rf" Mar 12 09:00:01 crc kubenswrapper[4809]: I0312 09:00:01.082512 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph"] Mar 12 09:00:01 crc kubenswrapper[4809]: I0312 09:00:01.105992 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c" Mar 12 09:00:01 crc kubenswrapper[4809]: E0312 09:00:01.106363 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:00:01 crc kubenswrapper[4809]: I0312 09:00:01.225382 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555100-2b7rf"] Mar 12 09:00:01 crc kubenswrapper[4809]: W0312 09:00:01.229628 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec3349ba_5be4_417e_a870_0b98408d9b25.slice/crio-713b5f1a5538172dbf3ffc170f8eb0874dc6bbb30bdc24309b932951234cd009 WatchSource:0}: Error finding container 713b5f1a5538172dbf3ffc170f8eb0874dc6bbb30bdc24309b932951234cd009: Status 404 returned error can't find the container with id 713b5f1a5538172dbf3ffc170f8eb0874dc6bbb30bdc24309b932951234cd009 Mar 12 09:00:02 crc kubenswrapper[4809]: I0312 09:00:02.026237 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555100-2b7rf" event={"ID":"ec3349ba-5be4-417e-a870-0b98408d9b25","Type":"ContainerStarted","Data":"713b5f1a5538172dbf3ffc170f8eb0874dc6bbb30bdc24309b932951234cd009"} Mar 12 
09:00:02 crc kubenswrapper[4809]: I0312 09:00:02.028537 4809 generic.go:334] "Generic (PLEG): container finished" podID="b7ea56f1-9c01-47d5-b5b2-d778e7dc339a" containerID="143305fe37fb0b9dd67eb703f68f087a4205eb79e78b70eb3d7ef557c0b89947" exitCode=0 Mar 12 09:00:02 crc kubenswrapper[4809]: I0312 09:00:02.028609 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" event={"ID":"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a","Type":"ContainerDied","Data":"143305fe37fb0b9dd67eb703f68f087a4205eb79e78b70eb3d7ef557c0b89947"} Mar 12 09:00:02 crc kubenswrapper[4809]: I0312 09:00:02.028653 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" event={"ID":"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a","Type":"ContainerStarted","Data":"535e88559af6910f75f39daa649abcbf9dd470922bc20f8e21d86a0ca4f9528f"} Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.516908 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.652279 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-777zh\" (UniqueName: \"kubernetes.io/projected/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-kube-api-access-777zh\") pod \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.652681 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-secret-volume\") pod \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.652764 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-config-volume\") pod \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\" (UID: \"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a\") " Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.655922 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7ea56f1-9c01-47d5-b5b2-d778e7dc339a" (UID: "b7ea56f1-9c01-47d5-b5b2-d778e7dc339a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.676617 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-kube-api-access-777zh" (OuterVolumeSpecName: "kube-api-access-777zh") pod "b7ea56f1-9c01-47d5-b5b2-d778e7dc339a" (UID: "b7ea56f1-9c01-47d5-b5b2-d778e7dc339a"). 
InnerVolumeSpecName "kube-api-access-777zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.680321 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7ea56f1-9c01-47d5-b5b2-d778e7dc339a" (UID: "b7ea56f1-9c01-47d5-b5b2-d778e7dc339a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.755997 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.756061 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-777zh\" (UniqueName: \"kubernetes.io/projected/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-kube-api-access-777zh\") on node \"crc\" DevicePath \"\"" Mar 12 09:00:03 crc kubenswrapper[4809]: I0312 09:00:03.756075 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 09:00:04 crc kubenswrapper[4809]: I0312 09:00:04.066944 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" event={"ID":"b7ea56f1-9c01-47d5-b5b2-d778e7dc339a","Type":"ContainerDied","Data":"535e88559af6910f75f39daa649abcbf9dd470922bc20f8e21d86a0ca4f9528f"} Mar 12 09:00:04 crc kubenswrapper[4809]: I0312 09:00:04.067427 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535e88559af6910f75f39daa649abcbf9dd470922bc20f8e21d86a0ca4f9528f" Mar 12 09:00:04 crc kubenswrapper[4809]: I0312 09:00:04.067548 4809 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph" Mar 12 09:00:04 crc kubenswrapper[4809]: I0312 09:00:04.612654 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb"] Mar 12 09:00:04 crc kubenswrapper[4809]: I0312 09:00:04.625419 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555055-wxgnb"] Mar 12 09:00:05 crc kubenswrapper[4809]: I0312 09:00:05.078333 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555100-2b7rf" event={"ID":"ec3349ba-5be4-417e-a870-0b98408d9b25","Type":"ContainerStarted","Data":"7377204713adfd0c507147626d78dcc6f6d20dcc8bee45b17edaed444804c7cf"} Mar 12 09:00:05 crc kubenswrapper[4809]: I0312 09:00:05.094241 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555100-2b7rf" podStartSLOduration=1.667090749 podStartE2EDuration="5.094223576s" podCreationTimestamp="2026-03-12 09:00:00 +0000 UTC" firstStartedPulling="2026-03-12 09:00:01.235365716 +0000 UTC m=+3674.817401449" lastFinishedPulling="2026-03-12 09:00:04.662498533 +0000 UTC m=+3678.244534276" observedRunningTime="2026-03-12 09:00:05.090097344 +0000 UTC m=+3678.672133077" watchObservedRunningTime="2026-03-12 09:00:05.094223576 +0000 UTC m=+3678.676259309" Mar 12 09:00:05 crc kubenswrapper[4809]: I0312 09:00:05.127191 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8580f49-42bb-4276-8173-4d83e59bb923" path="/var/lib/kubelet/pods/f8580f49-42bb-4276-8173-4d83e59bb923/volumes" Mar 12 09:00:06 crc kubenswrapper[4809]: I0312 09:00:06.092792 4809 generic.go:334] "Generic (PLEG): container finished" podID="ec3349ba-5be4-417e-a870-0b98408d9b25" containerID="7377204713adfd0c507147626d78dcc6f6d20dcc8bee45b17edaed444804c7cf" exitCode=0 Mar 12 
09:00:06 crc kubenswrapper[4809]: I0312 09:00:06.092865 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555100-2b7rf" event={"ID":"ec3349ba-5be4-417e-a870-0b98408d9b25","Type":"ContainerDied","Data":"7377204713adfd0c507147626d78dcc6f6d20dcc8bee45b17edaed444804c7cf"} Mar 12 09:00:07 crc kubenswrapper[4809]: I0312 09:00:07.547001 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555100-2b7rf" Mar 12 09:00:07 crc kubenswrapper[4809]: I0312 09:00:07.668762 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch54x\" (UniqueName: \"kubernetes.io/projected/ec3349ba-5be4-417e-a870-0b98408d9b25-kube-api-access-ch54x\") pod \"ec3349ba-5be4-417e-a870-0b98408d9b25\" (UID: \"ec3349ba-5be4-417e-a870-0b98408d9b25\") " Mar 12 09:00:07 crc kubenswrapper[4809]: I0312 09:00:07.676705 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3349ba-5be4-417e-a870-0b98408d9b25-kube-api-access-ch54x" (OuterVolumeSpecName: "kube-api-access-ch54x") pod "ec3349ba-5be4-417e-a870-0b98408d9b25" (UID: "ec3349ba-5be4-417e-a870-0b98408d9b25"). InnerVolumeSpecName "kube-api-access-ch54x". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 09:00:07 crc kubenswrapper[4809]: I0312 09:00:07.773220 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch54x\" (UniqueName: \"kubernetes.io/projected/ec3349ba-5be4-417e-a870-0b98408d9b25-kube-api-access-ch54x\") on node \"crc\" DevicePath \"\""
Mar 12 09:00:08 crc kubenswrapper[4809]: I0312 09:00:08.115248 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555100-2b7rf" event={"ID":"ec3349ba-5be4-417e-a870-0b98408d9b25","Type":"ContainerDied","Data":"713b5f1a5538172dbf3ffc170f8eb0874dc6bbb30bdc24309b932951234cd009"}
Mar 12 09:00:08 crc kubenswrapper[4809]: I0312 09:00:08.115623 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713b5f1a5538172dbf3ffc170f8eb0874dc6bbb30bdc24309b932951234cd009"
Mar 12 09:00:08 crc kubenswrapper[4809]: I0312 09:00:08.115328 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555100-2b7rf"
Mar 12 09:00:08 crc kubenswrapper[4809]: I0312 09:00:08.174168 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555094-zplxn"]
Mar 12 09:00:08 crc kubenswrapper[4809]: I0312 09:00:08.192411 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555094-zplxn"]
Mar 12 09:00:09 crc kubenswrapper[4809]: I0312 09:00:09.119573 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf36fd42-9301-443c-844c-50641f761cc1" path="/var/lib/kubelet/pods/cf36fd42-9301-443c-844c-50641f761cc1/volumes"
Mar 12 09:00:15 crc kubenswrapper[4809]: I0312 09:00:15.106906 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:00:15 crc kubenswrapper[4809]: E0312 09:00:15.107769 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:00:27 crc kubenswrapper[4809]: I0312 09:00:27.113044 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:00:27 crc kubenswrapper[4809]: E0312 09:00:27.113846 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:00:33 crc kubenswrapper[4809]: I0312 09:00:33.597892 4809 scope.go:117] "RemoveContainer" containerID="cdc90b692dfa53d96d4391805a888ba35d471433b8dbe0098b71c3b8803d3852"
Mar 12 09:00:33 crc kubenswrapper[4809]: I0312 09:00:33.662005 4809 scope.go:117] "RemoveContainer" containerID="b5301b8ec3a60e237fbbbf50703eb0ba1a7bddeb2d250bd34645eba699761fab"
Mar 12 09:00:40 crc kubenswrapper[4809]: I0312 09:00:40.107147 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:00:40 crc kubenswrapper[4809]: E0312 09:00:40.108130 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:00:51 crc kubenswrapper[4809]: I0312 09:00:51.109875 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:00:51 crc kubenswrapper[4809]: E0312 09:00:51.110889 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.164012 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29555101-9ndpl"]
Mar 12 09:01:00 crc kubenswrapper[4809]: E0312 09:01:00.165214 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ea56f1-9c01-47d5-b5b2-d778e7dc339a" containerName="collect-profiles"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.165232 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ea56f1-9c01-47d5-b5b2-d778e7dc339a" containerName="collect-profiles"
Mar 12 09:01:00 crc kubenswrapper[4809]: E0312 09:01:00.165286 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3349ba-5be4-417e-a870-0b98408d9b25" containerName="oc"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.165296 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3349ba-5be4-417e-a870-0b98408d9b25" containerName="oc"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.165613 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3349ba-5be4-417e-a870-0b98408d9b25" containerName="oc"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.165662 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ea56f1-9c01-47d5-b5b2-d778e7dc339a" containerName="collect-profiles"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.166673 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.180461 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29555101-9ndpl"]
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.275561 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-fernet-keys\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.275613 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84sbt\" (UniqueName: \"kubernetes.io/projected/51b417a3-2df1-4c98-8e49-b33a834c3e3e-kube-api-access-84sbt\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.275687 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-config-data\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.275821 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-combined-ca-bundle\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.377681 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-combined-ca-bundle\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.377809 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-fernet-keys\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.377837 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84sbt\" (UniqueName: \"kubernetes.io/projected/51b417a3-2df1-4c98-8e49-b33a834c3e3e-kube-api-access-84sbt\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.377902 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-config-data\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.384567 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-fernet-keys\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.384593 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-config-data\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.384943 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-combined-ca-bundle\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.393679 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84sbt\" (UniqueName: \"kubernetes.io/projected/51b417a3-2df1-4c98-8e49-b33a834c3e3e-kube-api-access-84sbt\") pod \"keystone-cron-29555101-9ndpl\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") " pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:00 crc kubenswrapper[4809]: I0312 09:01:00.506303 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:01 crc kubenswrapper[4809]: I0312 09:01:01.040649 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29555101-9ndpl"]
Mar 12 09:01:01 crc kubenswrapper[4809]: I0312 09:01:01.783414 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555101-9ndpl" event={"ID":"51b417a3-2df1-4c98-8e49-b33a834c3e3e","Type":"ContainerStarted","Data":"6e05237fae8eee2d9a81e967f3585add84d3bf80225506435a1e13101793e935"}
Mar 12 09:01:01 crc kubenswrapper[4809]: I0312 09:01:01.783752 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555101-9ndpl" event={"ID":"51b417a3-2df1-4c98-8e49-b33a834c3e3e","Type":"ContainerStarted","Data":"b52fb459f41b5d447207ce0c5125d85818e601dfb0d7c1362b8a5366a2bd52bf"}
Mar 12 09:01:01 crc kubenswrapper[4809]: I0312 09:01:01.814031 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29555101-9ndpl" podStartSLOduration=1.81400855 podStartE2EDuration="1.81400855s" podCreationTimestamp="2026-03-12 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 09:01:01.796276696 +0000 UTC m=+3735.378312449" watchObservedRunningTime="2026-03-12 09:01:01.81400855 +0000 UTC m=+3735.396044273"
Mar 12 09:01:05 crc kubenswrapper[4809]: I0312 09:01:05.831282 4809 generic.go:334] "Generic (PLEG): container finished" podID="51b417a3-2df1-4c98-8e49-b33a834c3e3e" containerID="6e05237fae8eee2d9a81e967f3585add84d3bf80225506435a1e13101793e935" exitCode=0
Mar 12 09:01:05 crc kubenswrapper[4809]: I0312 09:01:05.831347 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555101-9ndpl" event={"ID":"51b417a3-2df1-4c98-8e49-b33a834c3e3e","Type":"ContainerDied","Data":"6e05237fae8eee2d9a81e967f3585add84d3bf80225506435a1e13101793e935"}
Mar 12 09:01:06 crc kubenswrapper[4809]: I0312 09:01:06.106162 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:01:06 crc kubenswrapper[4809]: E0312 09:01:06.106528 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.300593 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.463474 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-config-data\") pod \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") "
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.463965 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-combined-ca-bundle\") pod \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") "
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.464172 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84sbt\" (UniqueName: \"kubernetes.io/projected/51b417a3-2df1-4c98-8e49-b33a834c3e3e-kube-api-access-84sbt\") pod \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") "
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.464285 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-fernet-keys\") pod \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\" (UID: \"51b417a3-2df1-4c98-8e49-b33a834c3e3e\") "
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.470243 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b417a3-2df1-4c98-8e49-b33a834c3e3e-kube-api-access-84sbt" (OuterVolumeSpecName: "kube-api-access-84sbt") pod "51b417a3-2df1-4c98-8e49-b33a834c3e3e" (UID: "51b417a3-2df1-4c98-8e49-b33a834c3e3e"). InnerVolumeSpecName "kube-api-access-84sbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.489990 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "51b417a3-2df1-4c98-8e49-b33a834c3e3e" (UID: "51b417a3-2df1-4c98-8e49-b33a834c3e3e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.509595 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51b417a3-2df1-4c98-8e49-b33a834c3e3e" (UID: "51b417a3-2df1-4c98-8e49-b33a834c3e3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.567391 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84sbt\" (UniqueName: \"kubernetes.io/projected/51b417a3-2df1-4c98-8e49-b33a834c3e3e-kube-api-access-84sbt\") on node \"crc\" DevicePath \"\""
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.567430 4809 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-fernet-keys\") on node \"crc\" DevicePath \"\""
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.567442 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.567549 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-config-data" (OuterVolumeSpecName: "config-data") pod "51b417a3-2df1-4c98-8e49-b33a834c3e3e" (UID: "51b417a3-2df1-4c98-8e49-b33a834c3e3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.669474 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51b417a3-2df1-4c98-8e49-b33a834c3e3e-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.852943 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555101-9ndpl" event={"ID":"51b417a3-2df1-4c98-8e49-b33a834c3e3e","Type":"ContainerDied","Data":"b52fb459f41b5d447207ce0c5125d85818e601dfb0d7c1362b8a5366a2bd52bf"}
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.853165 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b52fb459f41b5d447207ce0c5125d85818e601dfb0d7c1362b8a5366a2bd52bf"
Mar 12 09:01:07 crc kubenswrapper[4809]: I0312 09:01:07.853014 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29555101-9ndpl"
Mar 12 09:01:18 crc kubenswrapper[4809]: I0312 09:01:18.106091 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:01:18 crc kubenswrapper[4809]: E0312 09:01:18.107211 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:01:29 crc kubenswrapper[4809]: I0312 09:01:29.106264 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:01:29 crc kubenswrapper[4809]: E0312 09:01:29.107326 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:01:40 crc kubenswrapper[4809]: I0312 09:01:40.106044 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:01:40 crc kubenswrapper[4809]: E0312 09:01:40.107012 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:01:51 crc kubenswrapper[4809]: I0312 09:01:51.106822 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:01:51 crc kubenswrapper[4809]: E0312 09:01:51.108234 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.238629 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555102-85p4v"]
Mar 12 09:02:00 crc kubenswrapper[4809]: E0312 09:02:00.240019 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b417a3-2df1-4c98-8e49-b33a834c3e3e" containerName="keystone-cron"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.240037 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b417a3-2df1-4c98-8e49-b33a834c3e3e" containerName="keystone-cron"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.240459 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b417a3-2df1-4c98-8e49-b33a834c3e3e" containerName="keystone-cron"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.241634 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555102-85p4v"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.247984 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.248216 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.248615 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.254209 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555102-85p4v"]
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.340972 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hscj\" (UniqueName: \"kubernetes.io/projected/541bc502-f7a1-43fd-84e6-00950e43ae2a-kube-api-access-9hscj\") pod \"auto-csr-approver-29555102-85p4v\" (UID: \"541bc502-f7a1-43fd-84e6-00950e43ae2a\") " pod="openshift-infra/auto-csr-approver-29555102-85p4v"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.443106 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hscj\" (UniqueName: \"kubernetes.io/projected/541bc502-f7a1-43fd-84e6-00950e43ae2a-kube-api-access-9hscj\") pod \"auto-csr-approver-29555102-85p4v\" (UID: \"541bc502-f7a1-43fd-84e6-00950e43ae2a\") " pod="openshift-infra/auto-csr-approver-29555102-85p4v"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.461696 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hscj\" (UniqueName: \"kubernetes.io/projected/541bc502-f7a1-43fd-84e6-00950e43ae2a-kube-api-access-9hscj\") pod \"auto-csr-approver-29555102-85p4v\" (UID: \"541bc502-f7a1-43fd-84e6-00950e43ae2a\") " pod="openshift-infra/auto-csr-approver-29555102-85p4v"
Mar 12 09:02:00 crc kubenswrapper[4809]: I0312 09:02:00.576905 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555102-85p4v"
Mar 12 09:02:01 crc kubenswrapper[4809]: I0312 09:02:01.092463 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555102-85p4v"]
Mar 12 09:02:01 crc kubenswrapper[4809]: I0312 09:02:01.109053 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 09:02:01 crc kubenswrapper[4809]: I0312 09:02:01.965782 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555102-85p4v" event={"ID":"541bc502-f7a1-43fd-84e6-00950e43ae2a","Type":"ContainerStarted","Data":"14567d1ba3746401331ab0ce8837dd75da0fae6c7a58e3fefdb148c739eeffde"}
Mar 12 09:02:02 crc kubenswrapper[4809]: I0312 09:02:02.977640 4809 generic.go:334] "Generic (PLEG): container finished" podID="541bc502-f7a1-43fd-84e6-00950e43ae2a" containerID="9186d13e60ad556ce447fac37db64bbd01abecb4e15b409605b3cd6ffbbdae25" exitCode=0
Mar 12 09:02:02 crc kubenswrapper[4809]: I0312 09:02:02.977691 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555102-85p4v" event={"ID":"541bc502-f7a1-43fd-84e6-00950e43ae2a","Type":"ContainerDied","Data":"9186d13e60ad556ce447fac37db64bbd01abecb4e15b409605b3cd6ffbbdae25"}
Mar 12 09:02:03 crc kubenswrapper[4809]: I0312 09:02:03.134974 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:02:03 crc kubenswrapper[4809]: E0312 09:02:03.135462 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:02:04 crc kubenswrapper[4809]: I0312 09:02:04.497155 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555102-85p4v"
Mar 12 09:02:04 crc kubenswrapper[4809]: I0312 09:02:04.672507 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hscj\" (UniqueName: \"kubernetes.io/projected/541bc502-f7a1-43fd-84e6-00950e43ae2a-kube-api-access-9hscj\") pod \"541bc502-f7a1-43fd-84e6-00950e43ae2a\" (UID: \"541bc502-f7a1-43fd-84e6-00950e43ae2a\") "
Mar 12 09:02:04 crc kubenswrapper[4809]: I0312 09:02:04.679704 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541bc502-f7a1-43fd-84e6-00950e43ae2a-kube-api-access-9hscj" (OuterVolumeSpecName: "kube-api-access-9hscj") pod "541bc502-f7a1-43fd-84e6-00950e43ae2a" (UID: "541bc502-f7a1-43fd-84e6-00950e43ae2a"). InnerVolumeSpecName "kube-api-access-9hscj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 09:02:04 crc kubenswrapper[4809]: I0312 09:02:04.776772 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hscj\" (UniqueName: \"kubernetes.io/projected/541bc502-f7a1-43fd-84e6-00950e43ae2a-kube-api-access-9hscj\") on node \"crc\" DevicePath \"\""
Mar 12 09:02:05 crc kubenswrapper[4809]: I0312 09:02:05.025086 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555102-85p4v" event={"ID":"541bc502-f7a1-43fd-84e6-00950e43ae2a","Type":"ContainerDied","Data":"14567d1ba3746401331ab0ce8837dd75da0fae6c7a58e3fefdb148c739eeffde"}
Mar 12 09:02:05 crc kubenswrapper[4809]: I0312 09:02:05.025166 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14567d1ba3746401331ab0ce8837dd75da0fae6c7a58e3fefdb148c739eeffde"
Mar 12 09:02:05 crc kubenswrapper[4809]: I0312 09:02:05.025244 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555102-85p4v"
Mar 12 09:02:05 crc kubenswrapper[4809]: I0312 09:02:05.583596 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555096-p8jrr"]
Mar 12 09:02:05 crc kubenswrapper[4809]: I0312 09:02:05.601092 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555096-p8jrr"]
Mar 12 09:02:07 crc kubenswrapper[4809]: I0312 09:02:07.121047 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6928d8-a14c-4043-853e-4b3699cf88a9" path="/var/lib/kubelet/pods/7c6928d8-a14c-4043-853e-4b3699cf88a9/volumes"
Mar 12 09:02:14 crc kubenswrapper[4809]: I0312 09:02:14.107550 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:02:14 crc kubenswrapper[4809]: E0312 09:02:14.108636 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:02:26 crc kubenswrapper[4809]: I0312 09:02:26.106268 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:02:26 crc kubenswrapper[4809]: E0312 09:02:26.107579 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:02:33 crc kubenswrapper[4809]: I0312 09:02:33.810000 4809 scope.go:117] "RemoveContainer" containerID="265a25b2a415dcdfee6427c05ebb499dc950b596358ddb42073b8cb2b12ef17f"
Mar 12 09:02:37 crc kubenswrapper[4809]: I0312 09:02:37.124367 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:02:37 crc kubenswrapper[4809]: E0312 09:02:37.125264 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:02:51 crc kubenswrapper[4809]: I0312 09:02:51.106607 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:02:51 crc kubenswrapper[4809]: E0312 09:02:51.107547 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:03:04 crc kubenswrapper[4809]: I0312 09:03:04.106825 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:03:04 crc kubenswrapper[4809]: E0312 09:03:04.107723 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:03:16 crc kubenswrapper[4809]: I0312 09:03:16.106262 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:03:16 crc kubenswrapper[4809]: E0312 09:03:16.107340 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:03:28 crc kubenswrapper[4809]: I0312 09:03:28.106206 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:03:28 crc kubenswrapper[4809]: E0312 09:03:28.107551 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:03:42 crc kubenswrapper[4809]: I0312 09:03:42.106810 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:03:42 crc kubenswrapper[4809]: E0312 09:03:42.108010 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:03:56 crc kubenswrapper[4809]: I0312 09:03:56.107809 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:03:56 crc kubenswrapper[4809]: I0312 09:03:56.436893 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"f2be3e38a61c19b1d51c5c797b60016e60dd08a75d2793110268f3676c81027a"}
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.779922 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p4qxw"]
Mar 12 09:03:58 crc kubenswrapper[4809]: E0312 09:03:58.781841 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541bc502-f7a1-43fd-84e6-00950e43ae2a" containerName="oc"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.781876 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="541bc502-f7a1-43fd-84e6-00950e43ae2a" containerName="oc"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.782619 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="541bc502-f7a1-43fd-84e6-00950e43ae2a" containerName="oc"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.786767 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.804332 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p4qxw"]
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.889789 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-catalog-content\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.890234 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhfqn\" (UniqueName: \"kubernetes.io/projected/70e7297b-0ede-4566-8315-f0f4f01e544e-kube-api-access-bhfqn\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.890818 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-utilities\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.993532 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-utilities\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.993996 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-catalog-content\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.994061 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhfqn\" (UniqueName: \"kubernetes.io/projected/70e7297b-0ede-4566-8315-f0f4f01e544e-kube-api-access-bhfqn\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.994329 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-utilities\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:58 crc kubenswrapper[4809]: I0312 09:03:58.994665 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-catalog-content\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:59 crc kubenswrapper[4809]: I0312 09:03:59.023569 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhfqn\" (UniqueName: \"kubernetes.io/projected/70e7297b-0ede-4566-8315-f0f4f01e544e-kube-api-access-bhfqn\") pod \"community-operators-p4qxw\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:59 crc kubenswrapper[4809]: I0312 09:03:59.145771 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p4qxw"
Mar 12 09:03:59 crc kubenswrapper[4809]: I0312 09:03:59.807584 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p4qxw"]
Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.161850 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555104-mbbjm"]
Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.165200 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.168522 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.168993 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.169359 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.178068 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555104-mbbjm"] Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.239453 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzjm\" (UniqueName: \"kubernetes.io/projected/05046d95-86af-4ebf-90da-9510d0c6d499-kube-api-access-qgzjm\") pod \"auto-csr-approver-29555104-mbbjm\" (UID: \"05046d95-86af-4ebf-90da-9510d0c6d499\") " pod="openshift-infra/auto-csr-approver-29555104-mbbjm" Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.342231 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgzjm\" (UniqueName: \"kubernetes.io/projected/05046d95-86af-4ebf-90da-9510d0c6d499-kube-api-access-qgzjm\") pod \"auto-csr-approver-29555104-mbbjm\" (UID: \"05046d95-86af-4ebf-90da-9510d0c6d499\") " pod="openshift-infra/auto-csr-approver-29555104-mbbjm" Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.374485 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgzjm\" (UniqueName: \"kubernetes.io/projected/05046d95-86af-4ebf-90da-9510d0c6d499-kube-api-access-qgzjm\") pod \"auto-csr-approver-29555104-mbbjm\" (UID: \"05046d95-86af-4ebf-90da-9510d0c6d499\") " 
pod="openshift-infra/auto-csr-approver-29555104-mbbjm" Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.489519 4809 generic.go:334] "Generic (PLEG): container finished" podID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerID="5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46" exitCode=0 Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.489761 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4qxw" event={"ID":"70e7297b-0ede-4566-8315-f0f4f01e544e","Type":"ContainerDied","Data":"5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46"} Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.490189 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4qxw" event={"ID":"70e7297b-0ede-4566-8315-f0f4f01e544e","Type":"ContainerStarted","Data":"9c0ea32eef25bbd14ee8c899fcbb0224d83ca5ad5e5dc4fff3453a15971100a9"} Mar 12 09:04:00 crc kubenswrapper[4809]: I0312 09:04:00.499252 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" Mar 12 09:04:01 crc kubenswrapper[4809]: I0312 09:04:01.060941 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555104-mbbjm"] Mar 12 09:04:01 crc kubenswrapper[4809]: W0312 09:04:01.065214 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05046d95_86af_4ebf_90da_9510d0c6d499.slice/crio-c4f519e79b4254dd9990543172bdbd4001e8cbfdd367deda41ddd4064d396db3 WatchSource:0}: Error finding container c4f519e79b4254dd9990543172bdbd4001e8cbfdd367deda41ddd4064d396db3: Status 404 returned error can't find the container with id c4f519e79b4254dd9990543172bdbd4001e8cbfdd367deda41ddd4064d396db3 Mar 12 09:04:01 crc kubenswrapper[4809]: I0312 09:04:01.506434 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" event={"ID":"05046d95-86af-4ebf-90da-9510d0c6d499","Type":"ContainerStarted","Data":"c4f519e79b4254dd9990543172bdbd4001e8cbfdd367deda41ddd4064d396db3"} Mar 12 09:04:01 crc kubenswrapper[4809]: I0312 09:04:01.508748 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4qxw" event={"ID":"70e7297b-0ede-4566-8315-f0f4f01e544e","Type":"ContainerStarted","Data":"33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae"} Mar 12 09:04:02 crc kubenswrapper[4809]: I0312 09:04:02.549203 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" event={"ID":"05046d95-86af-4ebf-90da-9510d0c6d499","Type":"ContainerStarted","Data":"8ca46e6086e4dca25c386705ecbfca85d6860ea9ab000f783c125fc2520dc956"} Mar 12 09:04:02 crc kubenswrapper[4809]: I0312 09:04:02.590293 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" podStartSLOduration=1.655070901 
podStartE2EDuration="2.590108038s" podCreationTimestamp="2026-03-12 09:04:00 +0000 UTC" firstStartedPulling="2026-03-12 09:04:01.070037183 +0000 UTC m=+3914.652072916" lastFinishedPulling="2026-03-12 09:04:02.00507432 +0000 UTC m=+3915.587110053" observedRunningTime="2026-03-12 09:04:02.573490573 +0000 UTC m=+3916.155526306" watchObservedRunningTime="2026-03-12 09:04:02.590108038 +0000 UTC m=+3916.172143771" Mar 12 09:04:03 crc kubenswrapper[4809]: I0312 09:04:03.570693 4809 generic.go:334] "Generic (PLEG): container finished" podID="05046d95-86af-4ebf-90da-9510d0c6d499" containerID="8ca46e6086e4dca25c386705ecbfca85d6860ea9ab000f783c125fc2520dc956" exitCode=0 Mar 12 09:04:03 crc kubenswrapper[4809]: I0312 09:04:03.571040 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" event={"ID":"05046d95-86af-4ebf-90da-9510d0c6d499","Type":"ContainerDied","Data":"8ca46e6086e4dca25c386705ecbfca85d6860ea9ab000f783c125fc2520dc956"} Mar 12 09:04:03 crc kubenswrapper[4809]: I0312 09:04:03.584095 4809 generic.go:334] "Generic (PLEG): container finished" podID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerID="33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae" exitCode=0 Mar 12 09:04:03 crc kubenswrapper[4809]: I0312 09:04:03.584169 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4qxw" event={"ID":"70e7297b-0ede-4566-8315-f0f4f01e544e","Type":"ContainerDied","Data":"33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae"} Mar 12 09:04:04 crc kubenswrapper[4809]: I0312 09:04:04.603394 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4qxw" event={"ID":"70e7297b-0ede-4566-8315-f0f4f01e544e","Type":"ContainerStarted","Data":"d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60"} Mar 12 09:04:04 crc kubenswrapper[4809]: I0312 09:04:04.626230 4809 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-p4qxw" podStartSLOduration=3.06331937 podStartE2EDuration="6.626200574s" podCreationTimestamp="2026-03-12 09:03:58 +0000 UTC" firstStartedPulling="2026-03-12 09:04:00.491700879 +0000 UTC m=+3914.073736612" lastFinishedPulling="2026-03-12 09:04:04.054582083 +0000 UTC m=+3917.636617816" observedRunningTime="2026-03-12 09:04:04.624366014 +0000 UTC m=+3918.206401757" watchObservedRunningTime="2026-03-12 09:04:04.626200574 +0000 UTC m=+3918.208236307" Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.243731 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.351556 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgzjm\" (UniqueName: \"kubernetes.io/projected/05046d95-86af-4ebf-90da-9510d0c6d499-kube-api-access-qgzjm\") pod \"05046d95-86af-4ebf-90da-9510d0c6d499\" (UID: \"05046d95-86af-4ebf-90da-9510d0c6d499\") " Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.360401 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05046d95-86af-4ebf-90da-9510d0c6d499-kube-api-access-qgzjm" (OuterVolumeSpecName: "kube-api-access-qgzjm") pod "05046d95-86af-4ebf-90da-9510d0c6d499" (UID: "05046d95-86af-4ebf-90da-9510d0c6d499"). InnerVolumeSpecName "kube-api-access-qgzjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.454787 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgzjm\" (UniqueName: \"kubernetes.io/projected/05046d95-86af-4ebf-90da-9510d0c6d499-kube-api-access-qgzjm\") on node \"crc\" DevicePath \"\"" Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.615770 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" event={"ID":"05046d95-86af-4ebf-90da-9510d0c6d499","Type":"ContainerDied","Data":"c4f519e79b4254dd9990543172bdbd4001e8cbfdd367deda41ddd4064d396db3"} Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.615818 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555104-mbbjm" Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.615833 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f519e79b4254dd9990543172bdbd4001e8cbfdd367deda41ddd4064d396db3" Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.668935 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555098-dp8jh"] Mar 12 09:04:05 crc kubenswrapper[4809]: I0312 09:04:05.691928 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555098-dp8jh"] Mar 12 09:04:07 crc kubenswrapper[4809]: I0312 09:04:07.122768 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f832f91b-67de-49a6-ab5a-d157be573ce4" path="/var/lib/kubelet/pods/f832f91b-67de-49a6-ab5a-d157be573ce4/volumes" Mar 12 09:04:09 crc kubenswrapper[4809]: I0312 09:04:09.146791 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p4qxw" Mar 12 09:04:09 crc kubenswrapper[4809]: I0312 09:04:09.147300 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-p4qxw" Mar 12 09:04:09 crc kubenswrapper[4809]: I0312 09:04:09.211562 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p4qxw" Mar 12 09:04:09 crc kubenswrapper[4809]: I0312 09:04:09.735136 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p4qxw" Mar 12 09:04:09 crc kubenswrapper[4809]: I0312 09:04:09.788874 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p4qxw"] Mar 12 09:04:11 crc kubenswrapper[4809]: I0312 09:04:11.706580 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p4qxw" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerName="registry-server" containerID="cri-o://d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60" gracePeriod=2 Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.341995 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p4qxw" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.446688 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-catalog-content\") pod \"70e7297b-0ede-4566-8315-f0f4f01e544e\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.447027 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-utilities\") pod \"70e7297b-0ede-4566-8315-f0f4f01e544e\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.447223 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhfqn\" (UniqueName: \"kubernetes.io/projected/70e7297b-0ede-4566-8315-f0f4f01e544e-kube-api-access-bhfqn\") pod \"70e7297b-0ede-4566-8315-f0f4f01e544e\" (UID: \"70e7297b-0ede-4566-8315-f0f4f01e544e\") " Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.448429 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-utilities" (OuterVolumeSpecName: "utilities") pod "70e7297b-0ede-4566-8315-f0f4f01e544e" (UID: "70e7297b-0ede-4566-8315-f0f4f01e544e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.454433 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e7297b-0ede-4566-8315-f0f4f01e544e-kube-api-access-bhfqn" (OuterVolumeSpecName: "kube-api-access-bhfqn") pod "70e7297b-0ede-4566-8315-f0f4f01e544e" (UID: "70e7297b-0ede-4566-8315-f0f4f01e544e"). InnerVolumeSpecName "kube-api-access-bhfqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.506333 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70e7297b-0ede-4566-8315-f0f4f01e544e" (UID: "70e7297b-0ede-4566-8315-f0f4f01e544e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.550373 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhfqn\" (UniqueName: \"kubernetes.io/projected/70e7297b-0ede-4566-8315-f0f4f01e544e-kube-api-access-bhfqn\") on node \"crc\" DevicePath \"\"" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.550415 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.550430 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70e7297b-0ede-4566-8315-f0f4f01e544e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.719483 4809 generic.go:334] "Generic (PLEG): container finished" podID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerID="d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60" exitCode=0 Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.719550 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p4qxw" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.719555 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4qxw" event={"ID":"70e7297b-0ede-4566-8315-f0f4f01e544e","Type":"ContainerDied","Data":"d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60"} Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.719983 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4qxw" event={"ID":"70e7297b-0ede-4566-8315-f0f4f01e544e","Type":"ContainerDied","Data":"9c0ea32eef25bbd14ee8c899fcbb0224d83ca5ad5e5dc4fff3453a15971100a9"} Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.720013 4809 scope.go:117] "RemoveContainer" containerID="d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.770532 4809 scope.go:117] "RemoveContainer" containerID="33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.783426 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p4qxw"] Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.797035 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p4qxw"] Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.868334 4809 scope.go:117] "RemoveContainer" containerID="5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.923607 4809 scope.go:117] "RemoveContainer" containerID="d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60" Mar 12 09:04:12 crc kubenswrapper[4809]: E0312 09:04:12.925579 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60\": container with ID starting with d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60 not found: ID does not exist" containerID="d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.925626 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60"} err="failed to get container status \"d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60\": rpc error: code = NotFound desc = could not find container \"d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60\": container with ID starting with d5e5d2f6fad167e76954551eb0ce287c37d0b29c9b976a48ed9651d2ef206b60 not found: ID does not exist" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.925655 4809 scope.go:117] "RemoveContainer" containerID="33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae" Mar 12 09:04:12 crc kubenswrapper[4809]: E0312 09:04:12.926072 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae\": container with ID starting with 33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae not found: ID does not exist" containerID="33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.926136 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae"} err="failed to get container status \"33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae\": rpc error: code = NotFound desc = could not find container \"33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae\": container with ID 
starting with 33d7a244e3c59a4e99653b27f64a059a3c6896acae49580853f5a029ddbb09ae not found: ID does not exist" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.926164 4809 scope.go:117] "RemoveContainer" containerID="5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46" Mar 12 09:04:12 crc kubenswrapper[4809]: E0312 09:04:12.926423 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46\": container with ID starting with 5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46 not found: ID does not exist" containerID="5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46" Mar 12 09:04:12 crc kubenswrapper[4809]: I0312 09:04:12.926451 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46"} err="failed to get container status \"5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46\": rpc error: code = NotFound desc = could not find container \"5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46\": container with ID starting with 5296552086ba3f0a0916a918d450d22c80e245b3f0170c76f1ef552eb4092b46 not found: ID does not exist" Mar 12 09:04:13 crc kubenswrapper[4809]: I0312 09:04:13.118174 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" path="/var/lib/kubelet/pods/70e7297b-0ede-4566-8315-f0f4f01e544e/volumes" Mar 12 09:04:33 crc kubenswrapper[4809]: I0312 09:04:33.943592 4809 scope.go:117] "RemoveContainer" containerID="291dfcb6a1444dea535e270fc84049c359095cc40bea7cdfa4eb2807850a8fd5" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.381382 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mnlks"] Mar 12 09:04:52 crc kubenswrapper[4809]: E0312 
09:04:52.382508 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerName="extract-utilities" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.382525 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerName="extract-utilities" Mar 12 09:04:52 crc kubenswrapper[4809]: E0312 09:04:52.382544 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerName="extract-content" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.382556 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerName="extract-content" Mar 12 09:04:52 crc kubenswrapper[4809]: E0312 09:04:52.382593 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerName="registry-server" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.382602 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerName="registry-server" Mar 12 09:04:52 crc kubenswrapper[4809]: E0312 09:04:52.382624 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05046d95-86af-4ebf-90da-9510d0c6d499" containerName="oc" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.382633 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="05046d95-86af-4ebf-90da-9510d0c6d499" containerName="oc" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.382934 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e7297b-0ede-4566-8315-f0f4f01e544e" containerName="registry-server" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.382979 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="05046d95-86af-4ebf-90da-9510d0c6d499" containerName="oc" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.385221 4809 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlks" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.396574 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnlks"] Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.491699 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qv8\" (UniqueName: \"kubernetes.io/projected/d39621be-d3d7-42e2-ace8-8aab01932a8e-kube-api-access-d7qv8\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.491747 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-utilities\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.491922 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-catalog-content\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.594255 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qv8\" (UniqueName: \"kubernetes.io/projected/d39621be-d3d7-42e2-ace8-8aab01932a8e-kube-api-access-d7qv8\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks" Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 
09:04:52.594316 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-utilities\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.594484 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-catalog-content\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.594867 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-utilities\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.595024 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-catalog-content\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.615398 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qv8\" (UniqueName: \"kubernetes.io/projected/d39621be-d3d7-42e2-ace8-8aab01932a8e-kube-api-access-d7qv8\") pod \"certified-operators-mnlks\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") " pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:04:52 crc kubenswrapper[4809]: I0312 09:04:52.718748 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:04:53 crc kubenswrapper[4809]: I0312 09:04:53.381907 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnlks"]
Mar 12 09:04:54 crc kubenswrapper[4809]: I0312 09:04:54.254336 4809 generic.go:334] "Generic (PLEG): container finished" podID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerID="26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c" exitCode=0
Mar 12 09:04:54 crc kubenswrapper[4809]: I0312 09:04:54.254441 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlks" event={"ID":"d39621be-d3d7-42e2-ace8-8aab01932a8e","Type":"ContainerDied","Data":"26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c"}
Mar 12 09:04:54 crc kubenswrapper[4809]: I0312 09:04:54.254952 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlks" event={"ID":"d39621be-d3d7-42e2-ace8-8aab01932a8e","Type":"ContainerStarted","Data":"a88863a81d6b1678a4d387f1f7b83b6756d0d5a882dbf029a567fb10ac088ceb"}
Mar 12 09:04:55 crc kubenswrapper[4809]: I0312 09:04:55.267286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlks" event={"ID":"d39621be-d3d7-42e2-ace8-8aab01932a8e","Type":"ContainerStarted","Data":"14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb"}
Mar 12 09:04:58 crc kubenswrapper[4809]: I0312 09:04:58.296040 4809 generic.go:334] "Generic (PLEG): container finished" podID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerID="14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb" exitCode=0
Mar 12 09:04:58 crc kubenswrapper[4809]: I0312 09:04:58.296104 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlks" event={"ID":"d39621be-d3d7-42e2-ace8-8aab01932a8e","Type":"ContainerDied","Data":"14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb"}
Mar 12 09:04:59 crc kubenswrapper[4809]: I0312 09:04:59.308955 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlks" event={"ID":"d39621be-d3d7-42e2-ace8-8aab01932a8e","Type":"ContainerStarted","Data":"d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44"}
Mar 12 09:04:59 crc kubenswrapper[4809]: I0312 09:04:59.334737 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mnlks" podStartSLOduration=2.878263472 podStartE2EDuration="7.334716923s" podCreationTimestamp="2026-03-12 09:04:52 +0000 UTC" firstStartedPulling="2026-03-12 09:04:54.259470322 +0000 UTC m=+3967.841506055" lastFinishedPulling="2026-03-12 09:04:58.715923773 +0000 UTC m=+3972.297959506" observedRunningTime="2026-03-12 09:04:59.327922687 +0000 UTC m=+3972.909958430" watchObservedRunningTime="2026-03-12 09:04:59.334716923 +0000 UTC m=+3972.916752656"
Mar 12 09:05:02 crc kubenswrapper[4809]: I0312 09:05:02.720399 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:05:02 crc kubenswrapper[4809]: I0312 09:05:02.720939 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:05:02 crc kubenswrapper[4809]: I0312 09:05:02.803570 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:05:03 crc kubenswrapper[4809]: I0312 09:05:03.431945 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:05:03 crc kubenswrapper[4809]: I0312 09:05:03.482165 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mnlks"]
Mar 12 09:05:05 crc kubenswrapper[4809]: I0312 09:05:05.389668 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mnlks" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerName="registry-server" containerID="cri-o://d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44" gracePeriod=2
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.038450 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.194909 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-catalog-content\") pod \"d39621be-d3d7-42e2-ace8-8aab01932a8e\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") "
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.194964 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-utilities\") pod \"d39621be-d3d7-42e2-ace8-8aab01932a8e\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") "
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.195149 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7qv8\" (UniqueName: \"kubernetes.io/projected/d39621be-d3d7-42e2-ace8-8aab01932a8e-kube-api-access-d7qv8\") pod \"d39621be-d3d7-42e2-ace8-8aab01932a8e\" (UID: \"d39621be-d3d7-42e2-ace8-8aab01932a8e\") "
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.196162 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-utilities" (OuterVolumeSpecName: "utilities") pod "d39621be-d3d7-42e2-ace8-8aab01932a8e" (UID: "d39621be-d3d7-42e2-ace8-8aab01932a8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.198030 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.200909 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39621be-d3d7-42e2-ace8-8aab01932a8e-kube-api-access-d7qv8" (OuterVolumeSpecName: "kube-api-access-d7qv8") pod "d39621be-d3d7-42e2-ace8-8aab01932a8e" (UID: "d39621be-d3d7-42e2-ace8-8aab01932a8e"). InnerVolumeSpecName "kube-api-access-d7qv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.271732 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d39621be-d3d7-42e2-ace8-8aab01932a8e" (UID: "d39621be-d3d7-42e2-ace8-8aab01932a8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.300751 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7qv8\" (UniqueName: \"kubernetes.io/projected/d39621be-d3d7-42e2-ace8-8aab01932a8e-kube-api-access-d7qv8\") on node \"crc\" DevicePath \"\""
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.301130 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39621be-d3d7-42e2-ace8-8aab01932a8e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.407358 4809 generic.go:334] "Generic (PLEG): container finished" podID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerID="d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44" exitCode=0
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.407423 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlks" event={"ID":"d39621be-d3d7-42e2-ace8-8aab01932a8e","Type":"ContainerDied","Data":"d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44"}
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.407436 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlks"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.407462 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlks" event={"ID":"d39621be-d3d7-42e2-ace8-8aab01932a8e","Type":"ContainerDied","Data":"a88863a81d6b1678a4d387f1f7b83b6756d0d5a882dbf029a567fb10ac088ceb"}
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.407483 4809 scope.go:117] "RemoveContainer" containerID="d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.441026 4809 scope.go:117] "RemoveContainer" containerID="14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.458404 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mnlks"]
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.473427 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mnlks"]
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.478914 4809 scope.go:117] "RemoveContainer" containerID="26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.523218 4809 scope.go:117] "RemoveContainer" containerID="d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44"
Mar 12 09:05:06 crc kubenswrapper[4809]: E0312 09:05:06.523734 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44\": container with ID starting with d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44 not found: ID does not exist" containerID="d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.523774 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44"} err="failed to get container status \"d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44\": rpc error: code = NotFound desc = could not find container \"d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44\": container with ID starting with d6797b007688ba7fa00a4ba68dfebc053e6b9d34ee8f23f9f7bf4eef5599ff44 not found: ID does not exist"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.523801 4809 scope.go:117] "RemoveContainer" containerID="14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb"
Mar 12 09:05:06 crc kubenswrapper[4809]: E0312 09:05:06.524302 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb\": container with ID starting with 14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb not found: ID does not exist" containerID="14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.524343 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb"} err="failed to get container status \"14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb\": rpc error: code = NotFound desc = could not find container \"14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb\": container with ID starting with 14ba641d06021a82ec552604e2c04bbdf14067f572aba9089ac8664677d7b9eb not found: ID does not exist"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.524365 4809 scope.go:117] "RemoveContainer" containerID="26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c"
Mar 12 09:05:06 crc kubenswrapper[4809]: E0312 09:05:06.524658 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c\": container with ID starting with 26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c not found: ID does not exist" containerID="26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c"
Mar 12 09:05:06 crc kubenswrapper[4809]: I0312 09:05:06.524684 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c"} err="failed to get container status \"26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c\": rpc error: code = NotFound desc = could not find container \"26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c\": container with ID starting with 26af0b097b1b0edb93324ba4d5032d8300066796e5e842d0603cd1bed7685c6c not found: ID does not exist"
Mar 12 09:05:07 crc kubenswrapper[4809]: I0312 09:05:07.124306 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" path="/var/lib/kubelet/pods/d39621be-d3d7-42e2-ace8-8aab01932a8e/volumes"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.162309 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555106-59jwb"]
Mar 12 09:06:00 crc kubenswrapper[4809]: E0312 09:06:00.166597 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerName="registry-server"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.166630 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerName="registry-server"
Mar 12 09:06:00 crc kubenswrapper[4809]: E0312 09:06:00.166667 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerName="extract-utilities"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.166676 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerName="extract-utilities"
Mar 12 09:06:00 crc kubenswrapper[4809]: E0312 09:06:00.166689 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerName="extract-content"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.166695 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerName="extract-content"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.167011 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39621be-d3d7-42e2-ace8-8aab01932a8e" containerName="registry-server"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.168260 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555106-59jwb"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.180627 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.181147 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.181253 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.185223 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555106-59jwb"]
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.268797 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsqr\" (UniqueName: \"kubernetes.io/projected/25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f-kube-api-access-qfsqr\") pod \"auto-csr-approver-29555106-59jwb\" (UID: \"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f\") " pod="openshift-infra/auto-csr-approver-29555106-59jwb"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.372429 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsqr\" (UniqueName: \"kubernetes.io/projected/25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f-kube-api-access-qfsqr\") pod \"auto-csr-approver-29555106-59jwb\" (UID: \"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f\") " pod="openshift-infra/auto-csr-approver-29555106-59jwb"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.395789 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsqr\" (UniqueName: \"kubernetes.io/projected/25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f-kube-api-access-qfsqr\") pod \"auto-csr-approver-29555106-59jwb\" (UID: \"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f\") " pod="openshift-infra/auto-csr-approver-29555106-59jwb"
Mar 12 09:06:00 crc kubenswrapper[4809]: I0312 09:06:00.493464 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555106-59jwb"
Mar 12 09:06:01 crc kubenswrapper[4809]: W0312 09:06:01.048668 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25a2ac75_51a4_4fc9_8b9c_22dfa2f0430f.slice/crio-acb4a46fba73c883f9b4a3d9a207e1b26c5707fda1ab820b8f398e6c10e68bd1 WatchSource:0}: Error finding container acb4a46fba73c883f9b4a3d9a207e1b26c5707fda1ab820b8f398e6c10e68bd1: Status 404 returned error can't find the container with id acb4a46fba73c883f9b4a3d9a207e1b26c5707fda1ab820b8f398e6c10e68bd1
Mar 12 09:06:01 crc kubenswrapper[4809]: I0312 09:06:01.051359 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555106-59jwb"]
Mar 12 09:06:01 crc kubenswrapper[4809]: I0312 09:06:01.124685 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555106-59jwb" event={"ID":"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f","Type":"ContainerStarted","Data":"acb4a46fba73c883f9b4a3d9a207e1b26c5707fda1ab820b8f398e6c10e68bd1"}
Mar 12 09:06:03 crc kubenswrapper[4809]: I0312 09:06:03.161771 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555106-59jwb" event={"ID":"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f","Type":"ContainerStarted","Data":"e218e500f0f6cd6bbb2d7a6f407c9c728aa2de45a318f098dce261ce7b36215c"}
Mar 12 09:06:03 crc kubenswrapper[4809]: I0312 09:06:03.225096 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555106-59jwb" podStartSLOduration=2.095936773 podStartE2EDuration="3.225071368s" podCreationTimestamp="2026-03-12 09:06:00 +0000 UTC" firstStartedPulling="2026-03-12 09:06:01.054387352 +0000 UTC m=+4034.636423115" lastFinishedPulling="2026-03-12 09:06:02.183521977 +0000 UTC m=+4035.765557710" observedRunningTime="2026-03-12 09:06:03.214019886 +0000 UTC m=+4036.796055609" watchObservedRunningTime="2026-03-12 09:06:03.225071368 +0000 UTC m=+4036.807107101"
Mar 12 09:06:04 crc kubenswrapper[4809]: I0312 09:06:04.179852 4809 generic.go:334] "Generic (PLEG): container finished" podID="25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f" containerID="e218e500f0f6cd6bbb2d7a6f407c9c728aa2de45a318f098dce261ce7b36215c" exitCode=0
Mar 12 09:06:04 crc kubenswrapper[4809]: I0312 09:06:04.180179 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555106-59jwb" event={"ID":"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f","Type":"ContainerDied","Data":"e218e500f0f6cd6bbb2d7a6f407c9c728aa2de45a318f098dce261ce7b36215c"}
Mar 12 09:06:05 crc kubenswrapper[4809]: I0312 09:06:05.664475 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555106-59jwb"
Mar 12 09:06:05 crc kubenswrapper[4809]: I0312 09:06:05.731733 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfsqr\" (UniqueName: \"kubernetes.io/projected/25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f-kube-api-access-qfsqr\") pod \"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f\" (UID: \"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f\") "
Mar 12 09:06:05 crc kubenswrapper[4809]: I0312 09:06:05.743106 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f-kube-api-access-qfsqr" (OuterVolumeSpecName: "kube-api-access-qfsqr") pod "25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f" (UID: "25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f"). InnerVolumeSpecName "kube-api-access-qfsqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 09:06:05 crc kubenswrapper[4809]: I0312 09:06:05.837473 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfsqr\" (UniqueName: \"kubernetes.io/projected/25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f-kube-api-access-qfsqr\") on node \"crc\" DevicePath \"\""
Mar 12 09:06:06 crc kubenswrapper[4809]: I0312 09:06:06.221564 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555106-59jwb" event={"ID":"25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f","Type":"ContainerDied","Data":"acb4a46fba73c883f9b4a3d9a207e1b26c5707fda1ab820b8f398e6c10e68bd1"}
Mar 12 09:06:06 crc kubenswrapper[4809]: I0312 09:06:06.221615 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acb4a46fba73c883f9b4a3d9a207e1b26c5707fda1ab820b8f398e6c10e68bd1"
Mar 12 09:06:06 crc kubenswrapper[4809]: I0312 09:06:06.221676 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555106-59jwb"
Mar 12 09:06:06 crc kubenswrapper[4809]: I0312 09:06:06.773959 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555100-2b7rf"]
Mar 12 09:06:06 crc kubenswrapper[4809]: I0312 09:06:06.788603 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555100-2b7rf"]
Mar 12 09:06:07 crc kubenswrapper[4809]: I0312 09:06:07.121995 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3349ba-5be4-417e-a870-0b98408d9b25" path="/var/lib/kubelet/pods/ec3349ba-5be4-417e-a870-0b98408d9b25/volumes"
Mar 12 09:06:15 crc kubenswrapper[4809]: I0312 09:06:15.048552 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 09:06:15 crc kubenswrapper[4809]: I0312 09:06:15.049273 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 09:06:34 crc kubenswrapper[4809]: I0312 09:06:34.135475 4809 scope.go:117] "RemoveContainer" containerID="7377204713adfd0c507147626d78dcc6f6d20dcc8bee45b17edaed444804c7cf"
Mar 12 09:06:45 crc kubenswrapper[4809]: I0312 09:06:45.048656 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 09:06:45 crc kubenswrapper[4809]: I0312 09:06:45.049278 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 09:07:15 crc kubenswrapper[4809]: I0312 09:07:15.048439 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 09:07:15 crc kubenswrapper[4809]: I0312 09:07:15.049077 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 09:07:15 crc kubenswrapper[4809]: I0312 09:07:15.049152 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c"
Mar 12 09:07:15 crc kubenswrapper[4809]: I0312 09:07:15.050240 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2be3e38a61c19b1d51c5c797b60016e60dd08a75d2793110268f3676c81027a"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 09:07:15 crc kubenswrapper[4809]: I0312 09:07:15.050294 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://f2be3e38a61c19b1d51c5c797b60016e60dd08a75d2793110268f3676c81027a" gracePeriod=600
Mar 12 09:07:16 crc kubenswrapper[4809]: I0312 09:07:16.140598 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="f2be3e38a61c19b1d51c5c797b60016e60dd08a75d2793110268f3676c81027a" exitCode=0
Mar 12 09:07:16 crc kubenswrapper[4809]: I0312 09:07:16.140715 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"f2be3e38a61c19b1d51c5c797b60016e60dd08a75d2793110268f3676c81027a"}
Mar 12 09:07:16 crc kubenswrapper[4809]: I0312 09:07:16.142287 4809 scope.go:117] "RemoveContainer" containerID="18e7b6a6acc918a926c068dfaa70c8ee3f2195ac0234a00d247b295f88e7163c"
Mar 12 09:07:17 crc kubenswrapper[4809]: I0312 09:07:17.165254 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8"}
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.041729 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fhb8h"]
Mar 12 09:07:36 crc kubenswrapper[4809]: E0312 09:07:36.043105 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f" containerName="oc"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.043147 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f" containerName="oc"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.043515 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f" containerName="oc"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.045916 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.070559 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhb8h"]
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.201399 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-catalog-content\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.201499 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-utilities\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.201610 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5w9p\" (UniqueName: \"kubernetes.io/projected/b97e2d01-86ec-4665-878c-3aab2bbf780a-kube-api-access-d5w9p\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.303846 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-catalog-content\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.304205 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-utilities\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.304275 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5w9p\" (UniqueName: \"kubernetes.io/projected/b97e2d01-86ec-4665-878c-3aab2bbf780a-kube-api-access-d5w9p\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.304653 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-catalog-content\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.304734 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-utilities\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.336130 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5w9p\" (UniqueName: \"kubernetes.io/projected/b97e2d01-86ec-4665-878c-3aab2bbf780a-kube-api-access-d5w9p\") pod \"redhat-operators-fhb8h\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.384101 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhb8h"
Mar 12 09:07:36 crc kubenswrapper[4809]: I0312 09:07:36.966316 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fhb8h"]
Mar 12 09:07:37 crc kubenswrapper[4809]: I0312 09:07:37.428096 4809 generic.go:334] "Generic (PLEG): container finished" podID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerID="65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5" exitCode=0
Mar 12 09:07:37 crc kubenswrapper[4809]: I0312 09:07:37.428165 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhb8h" event={"ID":"b97e2d01-86ec-4665-878c-3aab2bbf780a","Type":"ContainerDied","Data":"65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5"}
Mar 12 09:07:37 crc kubenswrapper[4809]: I0312 09:07:37.428492 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhb8h" event={"ID":"b97e2d01-86ec-4665-878c-3aab2bbf780a","Type":"ContainerStarted","Data":"337a8197fcc1e90f3630b86b4283e6a60d29a0eac9649de878fe6b4329205ad7"}
Mar 12 09:07:37 crc kubenswrapper[4809]: I0312 09:07:37.430218 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 09:07:39 crc kubenswrapper[4809]: I0312 09:07:39.453234 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhb8h" event={"ID":"b97e2d01-86ec-4665-878c-3aab2bbf780a","Type":"ContainerStarted","Data":"aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102"}
Mar 12 09:07:43 crc kubenswrapper[4809]: I0312 09:07:43.903223 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdqnn"]
Mar 12 09:07:43 crc kubenswrapper[4809]: I0312 09:07:43.907949 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:43 crc kubenswrapper[4809]: I0312 09:07:43.917216 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdqnn"]
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.029531 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-utilities\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.029670 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgggq\" (UniqueName: \"kubernetes.io/projected/943e9195-97d0-448b-ba40-269b62beefd4-kube-api-access-sgggq\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.029767 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-catalog-content\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.132408 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgggq\" (UniqueName: \"kubernetes.io/projected/943e9195-97d0-448b-ba40-269b62beefd4-kube-api-access-sgggq\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.132509 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-catalog-content\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.132678 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-utilities\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.133608 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-catalog-content\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.134079 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-utilities\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.156692 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgggq\" (UniqueName: \"kubernetes.io/projected/943e9195-97d0-448b-ba40-269b62beefd4-kube-api-access-sgggq\") pod \"redhat-marketplace-qdqnn\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " pod="openshift-marketplace/redhat-marketplace-qdqnn"
Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.277443 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdqnn" Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.523340 4809 generic.go:334] "Generic (PLEG): container finished" podID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerID="aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102" exitCode=0 Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.523558 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhb8h" event={"ID":"b97e2d01-86ec-4665-878c-3aab2bbf780a","Type":"ContainerDied","Data":"aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102"} Mar 12 09:07:44 crc kubenswrapper[4809]: I0312 09:07:44.856673 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdqnn"] Mar 12 09:07:45 crc kubenswrapper[4809]: I0312 09:07:45.540268 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhb8h" event={"ID":"b97e2d01-86ec-4665-878c-3aab2bbf780a","Type":"ContainerStarted","Data":"2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51"} Mar 12 09:07:45 crc kubenswrapper[4809]: I0312 09:07:45.541683 4809 generic.go:334] "Generic (PLEG): container finished" podID="943e9195-97d0-448b-ba40-269b62beefd4" containerID="fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682" exitCode=0 Mar 12 09:07:45 crc kubenswrapper[4809]: I0312 09:07:45.541726 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdqnn" event={"ID":"943e9195-97d0-448b-ba40-269b62beefd4","Type":"ContainerDied","Data":"fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682"} Mar 12 09:07:45 crc kubenswrapper[4809]: I0312 09:07:45.541757 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdqnn" 
event={"ID":"943e9195-97d0-448b-ba40-269b62beefd4","Type":"ContainerStarted","Data":"2c640f2741c41045b5d8d5ef5cd24878e636bde7c572a45af37e374e946391d4"} Mar 12 09:07:45 crc kubenswrapper[4809]: I0312 09:07:45.565258 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fhb8h" podStartSLOduration=1.9898301699999998 podStartE2EDuration="9.565232057s" podCreationTimestamp="2026-03-12 09:07:36 +0000 UTC" firstStartedPulling="2026-03-12 09:07:37.42995584 +0000 UTC m=+4131.011991573" lastFinishedPulling="2026-03-12 09:07:45.005357727 +0000 UTC m=+4138.587393460" observedRunningTime="2026-03-12 09:07:45.558705298 +0000 UTC m=+4139.140741031" watchObservedRunningTime="2026-03-12 09:07:45.565232057 +0000 UTC m=+4139.147267790" Mar 12 09:07:46 crc kubenswrapper[4809]: I0312 09:07:46.385062 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fhb8h" Mar 12 09:07:46 crc kubenswrapper[4809]: I0312 09:07:46.385775 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fhb8h" Mar 12 09:07:47 crc kubenswrapper[4809]: I0312 09:07:47.477628 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhb8h" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="registry-server" probeResult="failure" output=< Mar 12 09:07:47 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:07:47 crc kubenswrapper[4809]: > Mar 12 09:07:47 crc kubenswrapper[4809]: I0312 09:07:47.572286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdqnn" event={"ID":"943e9195-97d0-448b-ba40-269b62beefd4","Type":"ContainerStarted","Data":"8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0"} Mar 12 09:07:48 crc kubenswrapper[4809]: I0312 09:07:48.584239 4809 generic.go:334] "Generic 
(PLEG): container finished" podID="943e9195-97d0-448b-ba40-269b62beefd4" containerID="8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0" exitCode=0 Mar 12 09:07:48 crc kubenswrapper[4809]: I0312 09:07:48.584293 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdqnn" event={"ID":"943e9195-97d0-448b-ba40-269b62beefd4","Type":"ContainerDied","Data":"8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0"} Mar 12 09:07:49 crc kubenswrapper[4809]: I0312 09:07:49.596774 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdqnn" event={"ID":"943e9195-97d0-448b-ba40-269b62beefd4","Type":"ContainerStarted","Data":"d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2"} Mar 12 09:07:49 crc kubenswrapper[4809]: I0312 09:07:49.621762 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdqnn" podStartSLOduration=3.052801149 podStartE2EDuration="6.62173931s" podCreationTimestamp="2026-03-12 09:07:43 +0000 UTC" firstStartedPulling="2026-03-12 09:07:45.543545303 +0000 UTC m=+4139.125581036" lastFinishedPulling="2026-03-12 09:07:49.112483464 +0000 UTC m=+4142.694519197" observedRunningTime="2026-03-12 09:07:49.613548685 +0000 UTC m=+4143.195584418" watchObservedRunningTime="2026-03-12 09:07:49.62173931 +0000 UTC m=+4143.203775043" Mar 12 09:07:54 crc kubenswrapper[4809]: I0312 09:07:54.277930 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdqnn" Mar 12 09:07:54 crc kubenswrapper[4809]: I0312 09:07:54.278867 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdqnn" Mar 12 09:07:54 crc kubenswrapper[4809]: I0312 09:07:54.642510 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-qdqnn" Mar 12 09:07:54 crc kubenswrapper[4809]: I0312 09:07:54.728316 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdqnn" Mar 12 09:07:54 crc kubenswrapper[4809]: I0312 09:07:54.902687 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdqnn"] Mar 12 09:07:56 crc kubenswrapper[4809]: I0312 09:07:56.676497 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdqnn" podUID="943e9195-97d0-448b-ba40-269b62beefd4" containerName="registry-server" containerID="cri-o://d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2" gracePeriod=2 Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.259517 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdqnn" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.427621 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgggq\" (UniqueName: \"kubernetes.io/projected/943e9195-97d0-448b-ba40-269b62beefd4-kube-api-access-sgggq\") pod \"943e9195-97d0-448b-ba40-269b62beefd4\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.427894 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-catalog-content\") pod \"943e9195-97d0-448b-ba40-269b62beefd4\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.428091 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-utilities\") pod 
\"943e9195-97d0-448b-ba40-269b62beefd4\" (UID: \"943e9195-97d0-448b-ba40-269b62beefd4\") " Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.429837 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-utilities" (OuterVolumeSpecName: "utilities") pod "943e9195-97d0-448b-ba40-269b62beefd4" (UID: "943e9195-97d0-448b-ba40-269b62beefd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.435354 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943e9195-97d0-448b-ba40-269b62beefd4-kube-api-access-sgggq" (OuterVolumeSpecName: "kube-api-access-sgggq") pod "943e9195-97d0-448b-ba40-269b62beefd4" (UID: "943e9195-97d0-448b-ba40-269b62beefd4"). InnerVolumeSpecName "kube-api-access-sgggq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.437504 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhb8h" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="registry-server" probeResult="failure" output=< Mar 12 09:07:57 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:07:57 crc kubenswrapper[4809]: > Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.455274 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "943e9195-97d0-448b-ba40-269b62beefd4" (UID: "943e9195-97d0-448b-ba40-269b62beefd4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.531064 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.531103 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943e9195-97d0-448b-ba40-269b62beefd4-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.531126 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgggq\" (UniqueName: \"kubernetes.io/projected/943e9195-97d0-448b-ba40-269b62beefd4-kube-api-access-sgggq\") on node \"crc\" DevicePath \"\"" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.691716 4809 generic.go:334] "Generic (PLEG): container finished" podID="943e9195-97d0-448b-ba40-269b62beefd4" containerID="d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2" exitCode=0 Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.691772 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdqnn" event={"ID":"943e9195-97d0-448b-ba40-269b62beefd4","Type":"ContainerDied","Data":"d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2"} Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.691802 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdqnn" event={"ID":"943e9195-97d0-448b-ba40-269b62beefd4","Type":"ContainerDied","Data":"2c640f2741c41045b5d8d5ef5cd24878e636bde7c572a45af37e374e946391d4"} Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.691806 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdqnn" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.691828 4809 scope.go:117] "RemoveContainer" containerID="d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.719120 4809 scope.go:117] "RemoveContainer" containerID="8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.742203 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdqnn"] Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.758336 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdqnn"] Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.768648 4809 scope.go:117] "RemoveContainer" containerID="fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.809331 4809 scope.go:117] "RemoveContainer" containerID="d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2" Mar 12 09:07:57 crc kubenswrapper[4809]: E0312 09:07:57.809752 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2\": container with ID starting with d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2 not found: ID does not exist" containerID="d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.809791 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2"} err="failed to get container status \"d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2\": rpc error: code = NotFound desc = could not find container 
\"d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2\": container with ID starting with d26ee31a229c121660278a3e7075fb01c559385a0910c9d13682887b7cd862d2 not found: ID does not exist" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.809818 4809 scope.go:117] "RemoveContainer" containerID="8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0" Mar 12 09:07:57 crc kubenswrapper[4809]: E0312 09:07:57.810196 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0\": container with ID starting with 8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0 not found: ID does not exist" containerID="8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.810223 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0"} err="failed to get container status \"8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0\": rpc error: code = NotFound desc = could not find container \"8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0\": container with ID starting with 8039f4c400fc871ac5507090d86e88999f11f48641931e2e18a4888ff61ca8b0 not found: ID does not exist" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.810239 4809 scope.go:117] "RemoveContainer" containerID="fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682" Mar 12 09:07:57 crc kubenswrapper[4809]: E0312 09:07:57.810478 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682\": container with ID starting with fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682 not found: ID does not exist" 
containerID="fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682" Mar 12 09:07:57 crc kubenswrapper[4809]: I0312 09:07:57.810506 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682"} err="failed to get container status \"fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682\": rpc error: code = NotFound desc = could not find container \"fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682\": container with ID starting with fb8b8b7c492a4b9d4e299f97cc5de78ea62fc9cda44449981731fcec6f147682 not found: ID does not exist" Mar 12 09:07:59 crc kubenswrapper[4809]: I0312 09:07:59.121445 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943e9195-97d0-448b-ba40-269b62beefd4" path="/var/lib/kubelet/pods/943e9195-97d0-448b-ba40-269b62beefd4/volumes" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.157449 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555108-qp7ts"] Mar 12 09:08:00 crc kubenswrapper[4809]: E0312 09:08:00.158795 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943e9195-97d0-448b-ba40-269b62beefd4" containerName="extract-content" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.158812 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="943e9195-97d0-448b-ba40-269b62beefd4" containerName="extract-content" Mar 12 09:08:00 crc kubenswrapper[4809]: E0312 09:08:00.158837 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943e9195-97d0-448b-ba40-269b62beefd4" containerName="registry-server" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.158844 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="943e9195-97d0-448b-ba40-269b62beefd4" containerName="registry-server" Mar 12 09:08:00 crc kubenswrapper[4809]: E0312 09:08:00.158881 4809 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="943e9195-97d0-448b-ba40-269b62beefd4" containerName="extract-utilities" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.158888 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="943e9195-97d0-448b-ba40-269b62beefd4" containerName="extract-utilities" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.159186 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="943e9195-97d0-448b-ba40-269b62beefd4" containerName="registry-server" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.160084 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.162558 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.164503 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.170074 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.171567 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555108-qp7ts"] Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.311664 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2nz\" (UniqueName: \"kubernetes.io/projected/4e64baa3-814f-425c-97ed-10f85a7dae63-kube-api-access-sj2nz\") pod \"auto-csr-approver-29555108-qp7ts\" (UID: \"4e64baa3-814f-425c-97ed-10f85a7dae63\") " pod="openshift-infra/auto-csr-approver-29555108-qp7ts" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.415399 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sj2nz\" (UniqueName: \"kubernetes.io/projected/4e64baa3-814f-425c-97ed-10f85a7dae63-kube-api-access-sj2nz\") pod \"auto-csr-approver-29555108-qp7ts\" (UID: \"4e64baa3-814f-425c-97ed-10f85a7dae63\") " pod="openshift-infra/auto-csr-approver-29555108-qp7ts" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.433738 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2nz\" (UniqueName: \"kubernetes.io/projected/4e64baa3-814f-425c-97ed-10f85a7dae63-kube-api-access-sj2nz\") pod \"auto-csr-approver-29555108-qp7ts\" (UID: \"4e64baa3-814f-425c-97ed-10f85a7dae63\") " pod="openshift-infra/auto-csr-approver-29555108-qp7ts" Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.486011 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" Mar 12 09:08:00 crc kubenswrapper[4809]: W0312 09:08:00.971272 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e64baa3_814f_425c_97ed_10f85a7dae63.slice/crio-210843fc16cda83f580f25f1bebd991020e1859ea7fe926cf3da052726e751e3 WatchSource:0}: Error finding container 210843fc16cda83f580f25f1bebd991020e1859ea7fe926cf3da052726e751e3: Status 404 returned error can't find the container with id 210843fc16cda83f580f25f1bebd991020e1859ea7fe926cf3da052726e751e3 Mar 12 09:08:00 crc kubenswrapper[4809]: I0312 09:08:00.972568 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555108-qp7ts"] Mar 12 09:08:01 crc kubenswrapper[4809]: I0312 09:08:01.743976 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" event={"ID":"4e64baa3-814f-425c-97ed-10f85a7dae63","Type":"ContainerStarted","Data":"210843fc16cda83f580f25f1bebd991020e1859ea7fe926cf3da052726e751e3"} Mar 12 09:08:02 crc kubenswrapper[4809]: I0312 09:08:02.763061 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" event={"ID":"4e64baa3-814f-425c-97ed-10f85a7dae63","Type":"ContainerStarted","Data":"abeac68e0d79f34a44247736b437b6fd61ee60a71286d2c1ca6e5c430c4b0aac"} Mar 12 09:08:02 crc kubenswrapper[4809]: I0312 09:08:02.787671 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" podStartSLOduration=1.4592235900000001 podStartE2EDuration="2.787651905s" podCreationTimestamp="2026-03-12 09:08:00 +0000 UTC" firstStartedPulling="2026-03-12 09:08:00.973754585 +0000 UTC m=+4154.555790318" lastFinishedPulling="2026-03-12 09:08:02.3021829 +0000 UTC m=+4155.884218633" observedRunningTime="2026-03-12 09:08:02.776839919 +0000 UTC m=+4156.358875662" watchObservedRunningTime="2026-03-12 09:08:02.787651905 +0000 UTC m=+4156.369687628" Mar 12 09:08:03 crc kubenswrapper[4809]: I0312 09:08:03.777249 4809 generic.go:334] "Generic (PLEG): container finished" podID="4e64baa3-814f-425c-97ed-10f85a7dae63" containerID="abeac68e0d79f34a44247736b437b6fd61ee60a71286d2c1ca6e5c430c4b0aac" exitCode=0 Mar 12 09:08:03 crc kubenswrapper[4809]: I0312 09:08:03.777354 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" event={"ID":"4e64baa3-814f-425c-97ed-10f85a7dae63","Type":"ContainerDied","Data":"abeac68e0d79f34a44247736b437b6fd61ee60a71286d2c1ca6e5c430c4b0aac"} Mar 12 09:08:05 crc kubenswrapper[4809]: I0312 09:08:05.955634 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" Mar 12 09:08:06 crc kubenswrapper[4809]: I0312 09:08:06.076606 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2nz\" (UniqueName: \"kubernetes.io/projected/4e64baa3-814f-425c-97ed-10f85a7dae63-kube-api-access-sj2nz\") pod \"4e64baa3-814f-425c-97ed-10f85a7dae63\" (UID: \"4e64baa3-814f-425c-97ed-10f85a7dae63\") " Mar 12 09:08:06 crc kubenswrapper[4809]: I0312 09:08:06.084362 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e64baa3-814f-425c-97ed-10f85a7dae63-kube-api-access-sj2nz" (OuterVolumeSpecName: "kube-api-access-sj2nz") pod "4e64baa3-814f-425c-97ed-10f85a7dae63" (UID: "4e64baa3-814f-425c-97ed-10f85a7dae63"). InnerVolumeSpecName "kube-api-access-sj2nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:08:06 crc kubenswrapper[4809]: I0312 09:08:06.181006 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2nz\" (UniqueName: \"kubernetes.io/projected/4e64baa3-814f-425c-97ed-10f85a7dae63-kube-api-access-sj2nz\") on node \"crc\" DevicePath \"\"" Mar 12 09:08:06 crc kubenswrapper[4809]: I0312 09:08:06.814448 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" event={"ID":"4e64baa3-814f-425c-97ed-10f85a7dae63","Type":"ContainerDied","Data":"210843fc16cda83f580f25f1bebd991020e1859ea7fe926cf3da052726e751e3"} Mar 12 09:08:06 crc kubenswrapper[4809]: I0312 09:08:06.814930 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="210843fc16cda83f580f25f1bebd991020e1859ea7fe926cf3da052726e751e3" Mar 12 09:08:06 crc kubenswrapper[4809]: I0312 09:08:06.814543 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555108-qp7ts" Mar 12 09:08:07 crc kubenswrapper[4809]: I0312 09:08:07.064901 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555102-85p4v"] Mar 12 09:08:07 crc kubenswrapper[4809]: I0312 09:08:07.082838 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555102-85p4v"] Mar 12 09:08:07 crc kubenswrapper[4809]: I0312 09:08:07.132537 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541bc502-f7a1-43fd-84e6-00950e43ae2a" path="/var/lib/kubelet/pods/541bc502-f7a1-43fd-84e6-00950e43ae2a/volumes" Mar 12 09:08:07 crc kubenswrapper[4809]: I0312 09:08:07.472987 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhb8h" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="registry-server" probeResult="failure" output=< Mar 12 09:08:07 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:08:07 crc kubenswrapper[4809]: > Mar 12 09:08:17 crc kubenswrapper[4809]: I0312 09:08:17.441707 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fhb8h" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="registry-server" probeResult="failure" output=< Mar 12 09:08:17 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:08:17 crc kubenswrapper[4809]: > Mar 12 09:08:26 crc kubenswrapper[4809]: I0312 09:08:26.469538 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fhb8h" Mar 12 09:08:26 crc kubenswrapper[4809]: I0312 09:08:26.547039 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fhb8h" Mar 12 09:08:26 crc kubenswrapper[4809]: I0312 09:08:26.720645 4809 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-fhb8h"] Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.104389 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fhb8h" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="registry-server" containerID="cri-o://2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51" gracePeriod=2 Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.677498 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fhb8h" Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.698474 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-catalog-content\") pod \"b97e2d01-86ec-4665-878c-3aab2bbf780a\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.698612 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-utilities\") pod \"b97e2d01-86ec-4665-878c-3aab2bbf780a\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.698682 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5w9p\" (UniqueName: \"kubernetes.io/projected/b97e2d01-86ec-4665-878c-3aab2bbf780a-kube-api-access-d5w9p\") pod \"b97e2d01-86ec-4665-878c-3aab2bbf780a\" (UID: \"b97e2d01-86ec-4665-878c-3aab2bbf780a\") " Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.699501 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-utilities" (OuterVolumeSpecName: "utilities") pod "b97e2d01-86ec-4665-878c-3aab2bbf780a" (UID: 
"b97e2d01-86ec-4665-878c-3aab2bbf780a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.708802 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b97e2d01-86ec-4665-878c-3aab2bbf780a-kube-api-access-d5w9p" (OuterVolumeSpecName: "kube-api-access-d5w9p") pod "b97e2d01-86ec-4665-878c-3aab2bbf780a" (UID: "b97e2d01-86ec-4665-878c-3aab2bbf780a"). InnerVolumeSpecName "kube-api-access-d5w9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.800992 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.801217 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5w9p\" (UniqueName: \"kubernetes.io/projected/b97e2d01-86ec-4665-878c-3aab2bbf780a-kube-api-access-d5w9p\") on node \"crc\" DevicePath \"\"" Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.875158 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b97e2d01-86ec-4665-878c-3aab2bbf780a" (UID: "b97e2d01-86ec-4665-878c-3aab2bbf780a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:08:28 crc kubenswrapper[4809]: I0312 09:08:28.907441 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97e2d01-86ec-4665-878c-3aab2bbf780a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.125305 4809 generic.go:334] "Generic (PLEG): container finished" podID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerID="2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51" exitCode=0 Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.125356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhb8h" event={"ID":"b97e2d01-86ec-4665-878c-3aab2bbf780a","Type":"ContainerDied","Data":"2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51"} Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.125387 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fhb8h" event={"ID":"b97e2d01-86ec-4665-878c-3aab2bbf780a","Type":"ContainerDied","Data":"337a8197fcc1e90f3630b86b4283e6a60d29a0eac9649de878fe6b4329205ad7"} Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.125469 4809 scope.go:117] "RemoveContainer" containerID="2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.125474 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fhb8h" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.191975 4809 scope.go:117] "RemoveContainer" containerID="aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.196104 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fhb8h"] Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.221567 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fhb8h"] Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.224150 4809 scope.go:117] "RemoveContainer" containerID="65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.305242 4809 scope.go:117] "RemoveContainer" containerID="2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51" Mar 12 09:08:29 crc kubenswrapper[4809]: E0312 09:08:29.307406 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51\": container with ID starting with 2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51 not found: ID does not exist" containerID="2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.307456 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51"} err="failed to get container status \"2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51\": rpc error: code = NotFound desc = could not find container \"2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51\": container with ID starting with 2d17a71d5417a4f403077a80d5bf0272c51f135d5fdf74bd1f6f5ee25e214a51 not found: ID does 
not exist" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.307488 4809 scope.go:117] "RemoveContainer" containerID="aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102" Mar 12 09:08:29 crc kubenswrapper[4809]: E0312 09:08:29.307769 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102\": container with ID starting with aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102 not found: ID does not exist" containerID="aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.307801 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102"} err="failed to get container status \"aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102\": rpc error: code = NotFound desc = could not find container \"aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102\": container with ID starting with aab587f7f0a5cd1b67a0193a443ab72f66ffa9692ef5849660e7087c5cf58102 not found: ID does not exist" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.307823 4809 scope.go:117] "RemoveContainer" containerID="65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5" Mar 12 09:08:29 crc kubenswrapper[4809]: E0312 09:08:29.308079 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5\": container with ID starting with 65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5 not found: ID does not exist" containerID="65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5" Mar 12 09:08:29 crc kubenswrapper[4809]: I0312 09:08:29.308113 4809 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5"} err="failed to get container status \"65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5\": rpc error: code = NotFound desc = could not find container \"65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5\": container with ID starting with 65897570d9cf6c6ab0beca4850c6e4f7602322696af69db577b0d33a731221a5 not found: ID does not exist" Mar 12 09:08:31 crc kubenswrapper[4809]: I0312 09:08:31.119460 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" path="/var/lib/kubelet/pods/b97e2d01-86ec-4665-878c-3aab2bbf780a/volumes" Mar 12 09:08:34 crc kubenswrapper[4809]: I0312 09:08:34.272820 4809 scope.go:117] "RemoveContainer" containerID="9186d13e60ad556ce447fac37db64bbd01abecb4e15b409605b3cd6ffbbdae25" Mar 12 09:09:45 crc kubenswrapper[4809]: I0312 09:09:45.048678 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:09:45 crc kubenswrapper[4809]: I0312 09:09:45.049265 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:09:50 crc kubenswrapper[4809]: E0312 09:09:50.455906 4809 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.80:57010->38.102.83.80:34627: read tcp 38.102.83.80:57010->38.102.83.80:34627: read: connection reset by peer Mar 12 09:09:50 crc kubenswrapper[4809]: E0312 09:09:50.455925 
4809 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:57010->38.102.83.80:34627: write tcp 38.102.83.80:57010->38.102.83.80:34627: write: broken pipe Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.139990 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555110-zclvc"] Mar 12 09:10:00 crc kubenswrapper[4809]: E0312 09:10:00.141379 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e64baa3-814f-425c-97ed-10f85a7dae63" containerName="oc" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.141399 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e64baa3-814f-425c-97ed-10f85a7dae63" containerName="oc" Mar 12 09:10:00 crc kubenswrapper[4809]: E0312 09:10:00.141420 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="extract-content" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.141431 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="extract-content" Mar 12 09:10:00 crc kubenswrapper[4809]: E0312 09:10:00.141452 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="registry-server" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.141463 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="registry-server" Mar 12 09:10:00 crc kubenswrapper[4809]: E0312 09:10:00.141488 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="extract-utilities" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.141497 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="extract-utilities" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.141826 4809 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4e64baa3-814f-425c-97ed-10f85a7dae63" containerName="oc" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.141845 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b97e2d01-86ec-4665-878c-3aab2bbf780a" containerName="registry-server" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.142954 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555110-zclvc" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.145445 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.145523 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.145745 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.156191 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555110-zclvc"] Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.246787 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94vwz\" (UniqueName: \"kubernetes.io/projected/9c05d355-1d83-4903-a3ec-d2be1b2aef2b-kube-api-access-94vwz\") pod \"auto-csr-approver-29555110-zclvc\" (UID: \"9c05d355-1d83-4903-a3ec-d2be1b2aef2b\") " pod="openshift-infra/auto-csr-approver-29555110-zclvc" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.349705 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94vwz\" (UniqueName: \"kubernetes.io/projected/9c05d355-1d83-4903-a3ec-d2be1b2aef2b-kube-api-access-94vwz\") pod \"auto-csr-approver-29555110-zclvc\" (UID: 
\"9c05d355-1d83-4903-a3ec-d2be1b2aef2b\") " pod="openshift-infra/auto-csr-approver-29555110-zclvc" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.381888 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94vwz\" (UniqueName: \"kubernetes.io/projected/9c05d355-1d83-4903-a3ec-d2be1b2aef2b-kube-api-access-94vwz\") pod \"auto-csr-approver-29555110-zclvc\" (UID: \"9c05d355-1d83-4903-a3ec-d2be1b2aef2b\") " pod="openshift-infra/auto-csr-approver-29555110-zclvc" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.463666 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555110-zclvc" Mar 12 09:10:00 crc kubenswrapper[4809]: I0312 09:10:00.930175 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555110-zclvc"] Mar 12 09:10:01 crc kubenswrapper[4809]: I0312 09:10:01.716825 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555110-zclvc" event={"ID":"9c05d355-1d83-4903-a3ec-d2be1b2aef2b","Type":"ContainerStarted","Data":"8493db0d026b312fa6613e3fbc8921622f842ca8b6657da0f6779e6f0d134b93"} Mar 12 09:10:03 crc kubenswrapper[4809]: I0312 09:10:03.753443 4809 generic.go:334] "Generic (PLEG): container finished" podID="9c05d355-1d83-4903-a3ec-d2be1b2aef2b" containerID="7fa6daeba317d318621aba1fe3d247a3caf19e7921ddc27583a167819e5af40a" exitCode=0 Mar 12 09:10:03 crc kubenswrapper[4809]: I0312 09:10:03.753566 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555110-zclvc" event={"ID":"9c05d355-1d83-4903-a3ec-d2be1b2aef2b","Type":"ContainerDied","Data":"7fa6daeba317d318621aba1fe3d247a3caf19e7921ddc27583a167819e5af40a"} Mar 12 09:10:05 crc kubenswrapper[4809]: I0312 09:10:05.347077 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555110-zclvc" Mar 12 09:10:05 crc kubenswrapper[4809]: I0312 09:10:05.399399 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94vwz\" (UniqueName: \"kubernetes.io/projected/9c05d355-1d83-4903-a3ec-d2be1b2aef2b-kube-api-access-94vwz\") pod \"9c05d355-1d83-4903-a3ec-d2be1b2aef2b\" (UID: \"9c05d355-1d83-4903-a3ec-d2be1b2aef2b\") " Mar 12 09:10:05 crc kubenswrapper[4809]: I0312 09:10:05.412906 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c05d355-1d83-4903-a3ec-d2be1b2aef2b-kube-api-access-94vwz" (OuterVolumeSpecName: "kube-api-access-94vwz") pod "9c05d355-1d83-4903-a3ec-d2be1b2aef2b" (UID: "9c05d355-1d83-4903-a3ec-d2be1b2aef2b"). InnerVolumeSpecName "kube-api-access-94vwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:10:05 crc kubenswrapper[4809]: I0312 09:10:05.503850 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94vwz\" (UniqueName: \"kubernetes.io/projected/9c05d355-1d83-4903-a3ec-d2be1b2aef2b-kube-api-access-94vwz\") on node \"crc\" DevicePath \"\"" Mar 12 09:10:05 crc kubenswrapper[4809]: I0312 09:10:05.783772 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555110-zclvc" event={"ID":"9c05d355-1d83-4903-a3ec-d2be1b2aef2b","Type":"ContainerDied","Data":"8493db0d026b312fa6613e3fbc8921622f842ca8b6657da0f6779e6f0d134b93"} Mar 12 09:10:05 crc kubenswrapper[4809]: I0312 09:10:05.783844 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8493db0d026b312fa6613e3fbc8921622f842ca8b6657da0f6779e6f0d134b93" Mar 12 09:10:05 crc kubenswrapper[4809]: I0312 09:10:05.783851 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555110-zclvc" Mar 12 09:10:06 crc kubenswrapper[4809]: I0312 09:10:06.445561 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555104-mbbjm"] Mar 12 09:10:06 crc kubenswrapper[4809]: I0312 09:10:06.462681 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555104-mbbjm"] Mar 12 09:10:07 crc kubenswrapper[4809]: I0312 09:10:07.122839 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05046d95-86af-4ebf-90da-9510d0c6d499" path="/var/lib/kubelet/pods/05046d95-86af-4ebf-90da-9510d0c6d499/volumes" Mar 12 09:10:15 crc kubenswrapper[4809]: I0312 09:10:15.048303 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:10:15 crc kubenswrapper[4809]: I0312 09:10:15.048956 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:10:30 crc kubenswrapper[4809]: E0312 09:10:30.864954 4809 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:51182->38.102.83.80:34627: write tcp 38.102.83.80:51182->38.102.83.80:34627: write: broken pipe Mar 12 09:10:35 crc kubenswrapper[4809]: I0312 09:10:35.296175 4809 scope.go:117] "RemoveContainer" containerID="8ca46e6086e4dca25c386705ecbfca85d6860ea9ab000f783c125fc2520dc956" Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.048790 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.049640 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.049696 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.050784 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.050867 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" gracePeriod=600 Mar 12 09:10:45 crc kubenswrapper[4809]: E0312 09:10:45.186165 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.313469 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" exitCode=0 Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.313524 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8"} Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.313564 4809 scope.go:117] "RemoveContainer" containerID="f2be3e38a61c19b1d51c5c797b60016e60dd08a75d2793110268f3676c81027a" Mar 12 09:10:45 crc kubenswrapper[4809]: I0312 09:10:45.314417 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:10:45 crc kubenswrapper[4809]: E0312 09:10:45.314686 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:11:00 crc kubenswrapper[4809]: I0312 09:11:00.106650 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:11:00 crc kubenswrapper[4809]: E0312 09:11:00.107407 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:11:15 crc kubenswrapper[4809]: I0312 09:11:15.106197 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:11:15 crc kubenswrapper[4809]: E0312 09:11:15.107021 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:11:29 crc kubenswrapper[4809]: I0312 09:11:29.107511 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:11:29 crc kubenswrapper[4809]: E0312 09:11:29.108807 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:11:44 crc kubenswrapper[4809]: I0312 09:11:44.107014 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:11:44 crc kubenswrapper[4809]: E0312 09:11:44.111020 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:11:55 crc kubenswrapper[4809]: I0312 09:11:55.107162 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:11:55 crc kubenswrapper[4809]: E0312 09:11:55.109033 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.178931 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555112-dpqqz"] Mar 12 09:12:00 crc kubenswrapper[4809]: E0312 09:12:00.180310 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c05d355-1d83-4903-a3ec-d2be1b2aef2b" containerName="oc" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.180328 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c05d355-1d83-4903-a3ec-d2be1b2aef2b" containerName="oc" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.180639 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c05d355-1d83-4903-a3ec-d2be1b2aef2b" containerName="oc" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.181725 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.185658 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.186165 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.186454 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.193795 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555112-dpqqz"] Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.346005 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbz6d\" (UniqueName: \"kubernetes.io/projected/7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e-kube-api-access-qbz6d\") pod \"auto-csr-approver-29555112-dpqqz\" (UID: \"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e\") " pod="openshift-infra/auto-csr-approver-29555112-dpqqz" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.449159 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbz6d\" (UniqueName: \"kubernetes.io/projected/7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e-kube-api-access-qbz6d\") pod \"auto-csr-approver-29555112-dpqqz\" (UID: \"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e\") " pod="openshift-infra/auto-csr-approver-29555112-dpqqz" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.482770 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbz6d\" (UniqueName: \"kubernetes.io/projected/7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e-kube-api-access-qbz6d\") pod \"auto-csr-approver-29555112-dpqqz\" (UID: \"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e\") " 
pod="openshift-infra/auto-csr-approver-29555112-dpqqz" Mar 12 09:12:00 crc kubenswrapper[4809]: I0312 09:12:00.519440 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" Mar 12 09:12:01 crc kubenswrapper[4809]: I0312 09:12:01.097299 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555112-dpqqz"] Mar 12 09:12:01 crc kubenswrapper[4809]: I0312 09:12:01.418424 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" event={"ID":"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e","Type":"ContainerStarted","Data":"2a914682e78bb81e303adb6ffd2e2e853e0a741a5ce5e38a7ee8ff17f0c58fc3"} Mar 12 09:12:02 crc kubenswrapper[4809]: I0312 09:12:02.435406 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" event={"ID":"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e","Type":"ContainerStarted","Data":"fb5a5c283e9be3baa3c7ceea4a44954305f059b5e6cbfa0b2c34c8633feb4c6b"} Mar 12 09:12:02 crc kubenswrapper[4809]: I0312 09:12:02.480089 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" podStartSLOduration=1.498246406 podStartE2EDuration="2.480067829s" podCreationTimestamp="2026-03-12 09:12:00 +0000 UTC" firstStartedPulling="2026-03-12 09:12:01.084852175 +0000 UTC m=+4394.666887898" lastFinishedPulling="2026-03-12 09:12:02.066673588 +0000 UTC m=+4395.648709321" observedRunningTime="2026-03-12 09:12:02.45336679 +0000 UTC m=+4396.035402533" watchObservedRunningTime="2026-03-12 09:12:02.480067829 +0000 UTC m=+4396.062103572" Mar 12 09:12:03 crc kubenswrapper[4809]: I0312 09:12:03.447081 4809 generic.go:334] "Generic (PLEG): container finished" podID="7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e" containerID="fb5a5c283e9be3baa3c7ceea4a44954305f059b5e6cbfa0b2c34c8633feb4c6b" exitCode=0 Mar 12 09:12:03 crc 
kubenswrapper[4809]: I0312 09:12:03.447338 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" event={"ID":"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e","Type":"ContainerDied","Data":"fb5a5c283e9be3baa3c7ceea4a44954305f059b5e6cbfa0b2c34c8633feb4c6b"} Mar 12 09:12:04 crc kubenswrapper[4809]: I0312 09:12:04.887491 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" Mar 12 09:12:04 crc kubenswrapper[4809]: I0312 09:12:04.921785 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbz6d\" (UniqueName: \"kubernetes.io/projected/7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e-kube-api-access-qbz6d\") pod \"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e\" (UID: \"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e\") " Mar 12 09:12:04 crc kubenswrapper[4809]: I0312 09:12:04.932464 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e-kube-api-access-qbz6d" (OuterVolumeSpecName: "kube-api-access-qbz6d") pod "7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e" (UID: "7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e"). InnerVolumeSpecName "kube-api-access-qbz6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:12:05 crc kubenswrapper[4809]: I0312 09:12:05.030393 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbz6d\" (UniqueName: \"kubernetes.io/projected/7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e-kube-api-access-qbz6d\") on node \"crc\" DevicePath \"\"" Mar 12 09:12:05 crc kubenswrapper[4809]: I0312 09:12:05.468645 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" event={"ID":"7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e","Type":"ContainerDied","Data":"2a914682e78bb81e303adb6ffd2e2e853e0a741a5ce5e38a7ee8ff17f0c58fc3"} Mar 12 09:12:05 crc kubenswrapper[4809]: I0312 09:12:05.468695 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a914682e78bb81e303adb6ffd2e2e853e0a741a5ce5e38a7ee8ff17f0c58fc3" Mar 12 09:12:05 crc kubenswrapper[4809]: I0312 09:12:05.468760 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555112-dpqqz" Mar 12 09:12:05 crc kubenswrapper[4809]: I0312 09:12:05.542659 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555106-59jwb"] Mar 12 09:12:05 crc kubenswrapper[4809]: I0312 09:12:05.554203 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555106-59jwb"] Mar 12 09:12:07 crc kubenswrapper[4809]: I0312 09:12:07.121696 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f" path="/var/lib/kubelet/pods/25a2ac75-51a4-4fc9-8b9c-22dfa2f0430f/volumes" Mar 12 09:12:08 crc kubenswrapper[4809]: I0312 09:12:08.106377 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:12:08 crc kubenswrapper[4809]: E0312 09:12:08.107227 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:12:23 crc kubenswrapper[4809]: I0312 09:12:23.106975 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:12:23 crc kubenswrapper[4809]: E0312 09:12:23.107815 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:12:35 crc kubenswrapper[4809]: I0312 09:12:35.424215 4809 scope.go:117] "RemoveContainer" containerID="e218e500f0f6cd6bbb2d7a6f407c9c728aa2de45a318f098dce261ce7b36215c" Mar 12 09:12:37 crc kubenswrapper[4809]: I0312 09:12:37.118520 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:12:37 crc kubenswrapper[4809]: E0312 09:12:37.119915 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:12:40 crc kubenswrapper[4809]: I0312 09:12:40.755553 4809 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" podUID="a4ff847c-f029-4537-ab92-0ae803769dfc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:12:40 crc kubenswrapper[4809]: I0312 09:12:40.802508 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-rkf7l" podUID="a4ff847c-f029-4537-ab92-0ae803769dfc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:12:40 crc kubenswrapper[4809]: I0312 09:12:40.983480 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" podUID="b7c605d7-46e5-4daa-beb3-4ef624bc0df9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:12:50 crc kubenswrapper[4809]: I0312 09:12:50.106236 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:12:50 crc kubenswrapper[4809]: E0312 09:12:50.107229 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:13:05 crc kubenswrapper[4809]: I0312 09:13:05.112852 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 
12 09:13:05 crc kubenswrapper[4809]: E0312 09:13:05.113872 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:13:20 crc kubenswrapper[4809]: I0312 09:13:20.106710 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:13:20 crc kubenswrapper[4809]: E0312 09:13:20.107496 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:13:34 crc kubenswrapper[4809]: I0312 09:13:34.105844 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:13:34 crc kubenswrapper[4809]: E0312 09:13:34.106848 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:13:48 crc kubenswrapper[4809]: I0312 09:13:48.106270 4809 scope.go:117] "RemoveContainer" 
containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:13:48 crc kubenswrapper[4809]: E0312 09:13:48.106962 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:13:59 crc kubenswrapper[4809]: I0312 09:13:59.107893 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:13:59 crc kubenswrapper[4809]: E0312 09:13:59.108959 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.151061 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555114-7sx6x"] Mar 12 09:14:00 crc kubenswrapper[4809]: E0312 09:14:00.151985 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e" containerName="oc" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.152000 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e" containerName="oc" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.152279 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e" containerName="oc" Mar 12 09:14:00 crc 
kubenswrapper[4809]: I0312 09:14:00.153240 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555114-7sx6x" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.156284 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.156313 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.156645 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.162368 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555114-7sx6x"] Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.272998 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6xlq\" (UniqueName: \"kubernetes.io/projected/5b3ff787-1ddc-4db7-928a-e6b6c685e6f9-kube-api-access-w6xlq\") pod \"auto-csr-approver-29555114-7sx6x\" (UID: \"5b3ff787-1ddc-4db7-928a-e6b6c685e6f9\") " pod="openshift-infra/auto-csr-approver-29555114-7sx6x" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.376231 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6xlq\" (UniqueName: \"kubernetes.io/projected/5b3ff787-1ddc-4db7-928a-e6b6c685e6f9-kube-api-access-w6xlq\") pod \"auto-csr-approver-29555114-7sx6x\" (UID: \"5b3ff787-1ddc-4db7-928a-e6b6c685e6f9\") " pod="openshift-infra/auto-csr-approver-29555114-7sx6x" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.402601 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6xlq\" (UniqueName: \"kubernetes.io/projected/5b3ff787-1ddc-4db7-928a-e6b6c685e6f9-kube-api-access-w6xlq\") pod 
\"auto-csr-approver-29555114-7sx6x\" (UID: \"5b3ff787-1ddc-4db7-928a-e6b6c685e6f9\") " pod="openshift-infra/auto-csr-approver-29555114-7sx6x" Mar 12 09:14:00 crc kubenswrapper[4809]: I0312 09:14:00.472785 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555114-7sx6x" Mar 12 09:14:01 crc kubenswrapper[4809]: W0312 09:14:01.010046 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b3ff787_1ddc_4db7_928a_e6b6c685e6f9.slice/crio-2a9be53d26d95e46e2cdc2aa8acf45b073a07eedccd8ab4e910027dfa4c94978 WatchSource:0}: Error finding container 2a9be53d26d95e46e2cdc2aa8acf45b073a07eedccd8ab4e910027dfa4c94978: Status 404 returned error can't find the container with id 2a9be53d26d95e46e2cdc2aa8acf45b073a07eedccd8ab4e910027dfa4c94978 Mar 12 09:14:01 crc kubenswrapper[4809]: I0312 09:14:01.010915 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555114-7sx6x"] Mar 12 09:14:01 crc kubenswrapper[4809]: I0312 09:14:01.014266 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 09:14:01 crc kubenswrapper[4809]: I0312 09:14:01.986986 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555114-7sx6x" event={"ID":"5b3ff787-1ddc-4db7-928a-e6b6c685e6f9","Type":"ContainerStarted","Data":"2a9be53d26d95e46e2cdc2aa8acf45b073a07eedccd8ab4e910027dfa4c94978"} Mar 12 09:14:03 crc kubenswrapper[4809]: I0312 09:14:03.013182 4809 generic.go:334] "Generic (PLEG): container finished" podID="5b3ff787-1ddc-4db7-928a-e6b6c685e6f9" containerID="afaea90a01e525c8a793c237e158379fce17e54cb95b2150e82113f8f7c9c6d2" exitCode=0 Mar 12 09:14:03 crc kubenswrapper[4809]: I0312 09:14:03.014652 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555114-7sx6x" 
event={"ID":"5b3ff787-1ddc-4db7-928a-e6b6c685e6f9","Type":"ContainerDied","Data":"afaea90a01e525c8a793c237e158379fce17e54cb95b2150e82113f8f7c9c6d2"} Mar 12 09:14:04 crc kubenswrapper[4809]: I0312 09:14:04.466233 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555114-7sx6x" Mar 12 09:14:04 crc kubenswrapper[4809]: I0312 09:14:04.646354 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6xlq\" (UniqueName: \"kubernetes.io/projected/5b3ff787-1ddc-4db7-928a-e6b6c685e6f9-kube-api-access-w6xlq\") pod \"5b3ff787-1ddc-4db7-928a-e6b6c685e6f9\" (UID: \"5b3ff787-1ddc-4db7-928a-e6b6c685e6f9\") " Mar 12 09:14:04 crc kubenswrapper[4809]: I0312 09:14:04.653540 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3ff787-1ddc-4db7-928a-e6b6c685e6f9-kube-api-access-w6xlq" (OuterVolumeSpecName: "kube-api-access-w6xlq") pod "5b3ff787-1ddc-4db7-928a-e6b6c685e6f9" (UID: "5b3ff787-1ddc-4db7-928a-e6b6c685e6f9"). InnerVolumeSpecName "kube-api-access-w6xlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:14:04 crc kubenswrapper[4809]: I0312 09:14:04.750171 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6xlq\" (UniqueName: \"kubernetes.io/projected/5b3ff787-1ddc-4db7-928a-e6b6c685e6f9-kube-api-access-w6xlq\") on node \"crc\" DevicePath \"\"" Mar 12 09:14:05 crc kubenswrapper[4809]: I0312 09:14:05.052098 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555114-7sx6x" event={"ID":"5b3ff787-1ddc-4db7-928a-e6b6c685e6f9","Type":"ContainerDied","Data":"2a9be53d26d95e46e2cdc2aa8acf45b073a07eedccd8ab4e910027dfa4c94978"} Mar 12 09:14:05 crc kubenswrapper[4809]: I0312 09:14:05.052174 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a9be53d26d95e46e2cdc2aa8acf45b073a07eedccd8ab4e910027dfa4c94978" Mar 12 09:14:05 crc kubenswrapper[4809]: I0312 09:14:05.052237 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555114-7sx6x" Mar 12 09:14:05 crc kubenswrapper[4809]: I0312 09:14:05.558478 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555108-qp7ts"] Mar 12 09:14:05 crc kubenswrapper[4809]: I0312 09:14:05.574644 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555108-qp7ts"] Mar 12 09:14:07 crc kubenswrapper[4809]: I0312 09:14:07.133948 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e64baa3-814f-425c-97ed-10f85a7dae63" path="/var/lib/kubelet/pods/4e64baa3-814f-425c-97ed-10f85a7dae63/volumes" Mar 12 09:14:11 crc kubenswrapper[4809]: I0312 09:14:11.106398 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:14:11 crc kubenswrapper[4809]: E0312 09:14:11.109238 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.337669 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pj5xt"] Mar 12 09:14:18 crc kubenswrapper[4809]: E0312 09:14:18.339575 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3ff787-1ddc-4db7-928a-e6b6c685e6f9" containerName="oc" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.339593 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3ff787-1ddc-4db7-928a-e6b6c685e6f9" containerName="oc" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.339889 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3ff787-1ddc-4db7-928a-e6b6c685e6f9" containerName="oc" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.341741 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.352522 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pj5xt"] Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.399462 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-catalog-content\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.399619 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958wt\" (UniqueName: \"kubernetes.io/projected/71db8238-e2ff-4fb0-87ca-858859c69757-kube-api-access-958wt\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.400260 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-utilities\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.504186 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-utilities\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.504269 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-catalog-content\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.504331 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-958wt\" (UniqueName: \"kubernetes.io/projected/71db8238-e2ff-4fb0-87ca-858859c69757-kube-api-access-958wt\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.504856 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-catalog-content\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.507241 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-utilities\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.543431 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-958wt\" (UniqueName: \"kubernetes.io/projected/71db8238-e2ff-4fb0-87ca-858859c69757-kube-api-access-958wt\") pod \"community-operators-pj5xt\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:18 crc kubenswrapper[4809]: I0312 09:14:18.679272 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:19 crc kubenswrapper[4809]: I0312 09:14:19.390144 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pj5xt"] Mar 12 09:14:19 crc kubenswrapper[4809]: W0312 09:14:19.614364 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71db8238_e2ff_4fb0_87ca_858859c69757.slice/crio-c8da7e10f6fcde53cdb3418699a9a7dc55fe69b252ef378d978703ffb5690a21 WatchSource:0}: Error finding container c8da7e10f6fcde53cdb3418699a9a7dc55fe69b252ef378d978703ffb5690a21: Status 404 returned error can't find the container with id c8da7e10f6fcde53cdb3418699a9a7dc55fe69b252ef378d978703ffb5690a21 Mar 12 09:14:20 crc kubenswrapper[4809]: I0312 09:14:20.276405 4809 generic.go:334] "Generic (PLEG): container finished" podID="71db8238-e2ff-4fb0-87ca-858859c69757" containerID="ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5" exitCode=0 Mar 12 09:14:20 crc kubenswrapper[4809]: I0312 09:14:20.276488 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj5xt" event={"ID":"71db8238-e2ff-4fb0-87ca-858859c69757","Type":"ContainerDied","Data":"ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5"} Mar 12 09:14:20 crc kubenswrapper[4809]: I0312 09:14:20.277029 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj5xt" event={"ID":"71db8238-e2ff-4fb0-87ca-858859c69757","Type":"ContainerStarted","Data":"c8da7e10f6fcde53cdb3418699a9a7dc55fe69b252ef378d978703ffb5690a21"} Mar 12 09:14:22 crc kubenswrapper[4809]: I0312 09:14:22.312572 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj5xt" 
event={"ID":"71db8238-e2ff-4fb0-87ca-858859c69757","Type":"ContainerStarted","Data":"193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690"} Mar 12 09:14:24 crc kubenswrapper[4809]: I0312 09:14:24.347754 4809 generic.go:334] "Generic (PLEG): container finished" podID="71db8238-e2ff-4fb0-87ca-858859c69757" containerID="193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690" exitCode=0 Mar 12 09:14:24 crc kubenswrapper[4809]: I0312 09:14:24.347875 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj5xt" event={"ID":"71db8238-e2ff-4fb0-87ca-858859c69757","Type":"ContainerDied","Data":"193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690"} Mar 12 09:14:25 crc kubenswrapper[4809]: I0312 09:14:25.110625 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:14:25 crc kubenswrapper[4809]: E0312 09:14:25.111479 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:14:25 crc kubenswrapper[4809]: I0312 09:14:25.368697 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj5xt" event={"ID":"71db8238-e2ff-4fb0-87ca-858859c69757","Type":"ContainerStarted","Data":"81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333"} Mar 12 09:14:28 crc kubenswrapper[4809]: I0312 09:14:28.680182 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:28 crc kubenswrapper[4809]: I0312 09:14:28.682270 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:29 crc kubenswrapper[4809]: I0312 09:14:29.734686 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pj5xt" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="registry-server" probeResult="failure" output=< Mar 12 09:14:29 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:14:29 crc kubenswrapper[4809]: > Mar 12 09:14:35 crc kubenswrapper[4809]: I0312 09:14:35.564877 4809 scope.go:117] "RemoveContainer" containerID="abeac68e0d79f34a44247736b437b6fd61ee60a71286d2c1ca6e5c430c4b0aac" Mar 12 09:14:36 crc kubenswrapper[4809]: I0312 09:14:36.106236 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:14:36 crc kubenswrapper[4809]: E0312 09:14:36.106837 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:14:38 crc kubenswrapper[4809]: I0312 09:14:38.746650 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:38 crc kubenswrapper[4809]: I0312 09:14:38.771442 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pj5xt" podStartSLOduration=16.05399772 podStartE2EDuration="20.771416573s" podCreationTimestamp="2026-03-12 09:14:18 +0000 UTC" firstStartedPulling="2026-03-12 09:14:20.279222112 +0000 UTC m=+4533.861257845" 
lastFinishedPulling="2026-03-12 09:14:24.996640965 +0000 UTC m=+4538.578676698" observedRunningTime="2026-03-12 09:14:25.397724271 +0000 UTC m=+4538.979760054" watchObservedRunningTime="2026-03-12 09:14:38.771416573 +0000 UTC m=+4552.353452306" Mar 12 09:14:38 crc kubenswrapper[4809]: I0312 09:14:38.795779 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:38 crc kubenswrapper[4809]: I0312 09:14:38.988817 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pj5xt"] Mar 12 09:14:40 crc kubenswrapper[4809]: I0312 09:14:40.599670 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pj5xt" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="registry-server" containerID="cri-o://81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333" gracePeriod=2 Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.174932 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.290539 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-958wt\" (UniqueName: \"kubernetes.io/projected/71db8238-e2ff-4fb0-87ca-858859c69757-kube-api-access-958wt\") pod \"71db8238-e2ff-4fb0-87ca-858859c69757\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.290692 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-utilities\") pod \"71db8238-e2ff-4fb0-87ca-858859c69757\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.290780 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-catalog-content\") pod \"71db8238-e2ff-4fb0-87ca-858859c69757\" (UID: \"71db8238-e2ff-4fb0-87ca-858859c69757\") " Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.293297 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-utilities" (OuterVolumeSpecName: "utilities") pod "71db8238-e2ff-4fb0-87ca-858859c69757" (UID: "71db8238-e2ff-4fb0-87ca-858859c69757"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.298311 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71db8238-e2ff-4fb0-87ca-858859c69757-kube-api-access-958wt" (OuterVolumeSpecName: "kube-api-access-958wt") pod "71db8238-e2ff-4fb0-87ca-858859c69757" (UID: "71db8238-e2ff-4fb0-87ca-858859c69757"). InnerVolumeSpecName "kube-api-access-958wt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.357265 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71db8238-e2ff-4fb0-87ca-858859c69757" (UID: "71db8238-e2ff-4fb0-87ca-858859c69757"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.394644 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-958wt\" (UniqueName: \"kubernetes.io/projected/71db8238-e2ff-4fb0-87ca-858859c69757-kube-api-access-958wt\") on node \"crc\" DevicePath \"\"" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.394686 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.394702 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71db8238-e2ff-4fb0-87ca-858859c69757-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.617351 4809 generic.go:334] "Generic (PLEG): container finished" podID="71db8238-e2ff-4fb0-87ca-858859c69757" containerID="81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333" exitCode=0 Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.617407 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj5xt" event={"ID":"71db8238-e2ff-4fb0-87ca-858859c69757","Type":"ContainerDied","Data":"81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333"} Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.617455 4809 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-pj5xt" event={"ID":"71db8238-e2ff-4fb0-87ca-858859c69757","Type":"ContainerDied","Data":"c8da7e10f6fcde53cdb3418699a9a7dc55fe69b252ef378d978703ffb5690a21"} Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.617478 4809 scope.go:117] "RemoveContainer" containerID="81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.617531 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pj5xt" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.650523 4809 scope.go:117] "RemoveContainer" containerID="193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.682260 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pj5xt"] Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.690762 4809 scope.go:117] "RemoveContainer" containerID="ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.696076 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pj5xt"] Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.751266 4809 scope.go:117] "RemoveContainer" containerID="81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333" Mar 12 09:14:41 crc kubenswrapper[4809]: E0312 09:14:41.751803 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333\": container with ID starting with 81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333 not found: ID does not exist" containerID="81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 
09:14:41.751837 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333"} err="failed to get container status \"81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333\": rpc error: code = NotFound desc = could not find container \"81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333\": container with ID starting with 81b75fe38bee0c24a74fb2fd81c4ba22a73ffd954550d4796470cab58f7e6333 not found: ID does not exist" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.751867 4809 scope.go:117] "RemoveContainer" containerID="193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690" Mar 12 09:14:41 crc kubenswrapper[4809]: E0312 09:14:41.752441 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690\": container with ID starting with 193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690 not found: ID does not exist" containerID="193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.752487 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690"} err="failed to get container status \"193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690\": rpc error: code = NotFound desc = could not find container \"193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690\": container with ID starting with 193ba0236647d05212297ea378f6360f9f4a3b08483db686cf431526e9b9c690 not found: ID does not exist" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.752516 4809 scope.go:117] "RemoveContainer" containerID="ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5" Mar 12 09:14:41 crc 
kubenswrapper[4809]: E0312 09:14:41.753615 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5\": container with ID starting with ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5 not found: ID does not exist" containerID="ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5" Mar 12 09:14:41 crc kubenswrapper[4809]: I0312 09:14:41.753648 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5"} err="failed to get container status \"ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5\": rpc error: code = NotFound desc = could not find container \"ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5\": container with ID starting with ba7fc66014092b9e461b5828605ade4e58077bbe880eecc7f5466aefd59410e5 not found: ID does not exist" Mar 12 09:14:43 crc kubenswrapper[4809]: I0312 09:14:43.128020 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" path="/var/lib/kubelet/pods/71db8238-e2ff-4fb0-87ca-858859c69757/volumes" Mar 12 09:14:49 crc kubenswrapper[4809]: I0312 09:14:49.108676 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:14:49 crc kubenswrapper[4809]: E0312 09:14:49.109507 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:15:00 crc 
kubenswrapper[4809]: I0312 09:15:00.165403 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7"] Mar 12 09:15:00 crc kubenswrapper[4809]: E0312 09:15:00.166394 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="extract-content" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.166412 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="extract-content" Mar 12 09:15:00 crc kubenswrapper[4809]: E0312 09:15:00.166430 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="registry-server" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.166437 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="registry-server" Mar 12 09:15:00 crc kubenswrapper[4809]: E0312 09:15:00.166490 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="extract-utilities" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.166496 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="extract-utilities" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.166719 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="71db8238-e2ff-4fb0-87ca-858859c69757" containerName="registry-server" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.167566 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.170516 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.177647 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.183914 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7"] Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.306840 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a5b202-6771-4a9e-af29-ea8f14539721-config-volume\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.307098 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a5b202-6771-4a9e-af29-ea8f14539721-secret-volume\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.307179 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjls5\" (UniqueName: \"kubernetes.io/projected/a4a5b202-6771-4a9e-af29-ea8f14539721-kube-api-access-zjls5\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.409949 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a5b202-6771-4a9e-af29-ea8f14539721-config-volume\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.410089 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a5b202-6771-4a9e-af29-ea8f14539721-secret-volume\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.410132 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjls5\" (UniqueName: \"kubernetes.io/projected/a4a5b202-6771-4a9e-af29-ea8f14539721-kube-api-access-zjls5\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.413853 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a5b202-6771-4a9e-af29-ea8f14539721-config-volume\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.423285 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a4a5b202-6771-4a9e-af29-ea8f14539721-secret-volume\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.441551 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjls5\" (UniqueName: \"kubernetes.io/projected/a4a5b202-6771-4a9e-af29-ea8f14539721-kube-api-access-zjls5\") pod \"collect-profiles-29555115-d7cq7\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:00 crc kubenswrapper[4809]: I0312 09:15:00.492560 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:01 crc kubenswrapper[4809]: I0312 09:15:01.090205 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7"] Mar 12 09:15:01 crc kubenswrapper[4809]: W0312 09:15:01.107088 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a5b202_6771_4a9e_af29_ea8f14539721.slice/crio-709d9b79d4d8bad8acc22481db99a78f73646e18208b5d1ceddfec424f1904af WatchSource:0}: Error finding container 709d9b79d4d8bad8acc22481db99a78f73646e18208b5d1ceddfec424f1904af: Status 404 returned error can't find the container with id 709d9b79d4d8bad8acc22481db99a78f73646e18208b5d1ceddfec424f1904af Mar 12 09:15:01 crc kubenswrapper[4809]: I0312 09:15:01.863385 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" event={"ID":"a4a5b202-6771-4a9e-af29-ea8f14539721","Type":"ContainerStarted","Data":"ac48b9371a7e1d6b62ac5d8c5cfd8d000296493887267d0795959d3c55c04884"} Mar 12 09:15:01 crc 
kubenswrapper[4809]: I0312 09:15:01.863967 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" event={"ID":"a4a5b202-6771-4a9e-af29-ea8f14539721","Type":"ContainerStarted","Data":"709d9b79d4d8bad8acc22481db99a78f73646e18208b5d1ceddfec424f1904af"} Mar 12 09:15:01 crc kubenswrapper[4809]: I0312 09:15:01.889028 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" podStartSLOduration=1.889010927 podStartE2EDuration="1.889010927s" podCreationTimestamp="2026-03-12 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 09:15:01.881572514 +0000 UTC m=+4575.463608247" watchObservedRunningTime="2026-03-12 09:15:01.889010927 +0000 UTC m=+4575.471046660" Mar 12 09:15:02 crc kubenswrapper[4809]: I0312 09:15:02.874464 4809 generic.go:334] "Generic (PLEG): container finished" podID="a4a5b202-6771-4a9e-af29-ea8f14539721" containerID="ac48b9371a7e1d6b62ac5d8c5cfd8d000296493887267d0795959d3c55c04884" exitCode=0 Mar 12 09:15:02 crc kubenswrapper[4809]: I0312 09:15:02.874831 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" event={"ID":"a4a5b202-6771-4a9e-af29-ea8f14539721","Type":"ContainerDied","Data":"ac48b9371a7e1d6b62ac5d8c5cfd8d000296493887267d0795959d3c55c04884"} Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.106515 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:15:04 crc kubenswrapper[4809]: E0312 09:15:04.107226 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.445183 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.547214 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjls5\" (UniqueName: \"kubernetes.io/projected/a4a5b202-6771-4a9e-af29-ea8f14539721-kube-api-access-zjls5\") pod \"a4a5b202-6771-4a9e-af29-ea8f14539721\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.548997 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a5b202-6771-4a9e-af29-ea8f14539721-config-volume\") pod \"a4a5b202-6771-4a9e-af29-ea8f14539721\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.549488 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a5b202-6771-4a9e-af29-ea8f14539721-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4a5b202-6771-4a9e-af29-ea8f14539721" (UID: "a4a5b202-6771-4a9e-af29-ea8f14539721"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.550452 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a5b202-6771-4a9e-af29-ea8f14539721-secret-volume\") pod \"a4a5b202-6771-4a9e-af29-ea8f14539721\" (UID: \"a4a5b202-6771-4a9e-af29-ea8f14539721\") " Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.556249 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4a5b202-6771-4a9e-af29-ea8f14539721-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.556448 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a5b202-6771-4a9e-af29-ea8f14539721-kube-api-access-zjls5" (OuterVolumeSpecName: "kube-api-access-zjls5") pod "a4a5b202-6771-4a9e-af29-ea8f14539721" (UID: "a4a5b202-6771-4a9e-af29-ea8f14539721"). InnerVolumeSpecName "kube-api-access-zjls5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.558596 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a5b202-6771-4a9e-af29-ea8f14539721-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4a5b202-6771-4a9e-af29-ea8f14539721" (UID: "a4a5b202-6771-4a9e-af29-ea8f14539721"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.658731 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjls5\" (UniqueName: \"kubernetes.io/projected/a4a5b202-6771-4a9e-af29-ea8f14539721-kube-api-access-zjls5\") on node \"crc\" DevicePath \"\"" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.658789 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4a5b202-6771-4a9e-af29-ea8f14539721-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.914361 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" event={"ID":"a4a5b202-6771-4a9e-af29-ea8f14539721","Type":"ContainerDied","Data":"709d9b79d4d8bad8acc22481db99a78f73646e18208b5d1ceddfec424f1904af"} Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.914403 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="709d9b79d4d8bad8acc22481db99a78f73646e18208b5d1ceddfec424f1904af" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.914670 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555115-d7cq7" Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.953231 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz"] Mar 12 09:15:04 crc kubenswrapper[4809]: I0312 09:15:04.966472 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555070-9qckz"] Mar 12 09:15:05 crc kubenswrapper[4809]: I0312 09:15:05.129459 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1804f4be-3be2-4237-8059-f28f28a33dba" path="/var/lib/kubelet/pods/1804f4be-3be2-4237-8059-f28f28a33dba/volumes" Mar 12 09:15:15 crc kubenswrapper[4809]: I0312 09:15:15.109576 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:15:15 crc kubenswrapper[4809]: E0312 09:15:15.112105 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.644031 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pg2xr"] Mar 12 09:15:25 crc kubenswrapper[4809]: E0312 09:15:25.645760 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a5b202-6771-4a9e-af29-ea8f14539721" containerName="collect-profiles" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.645778 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a5b202-6771-4a9e-af29-ea8f14539721" containerName="collect-profiles" Mar 12 09:15:25 crc 
kubenswrapper[4809]: I0312 09:15:25.646084 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a5b202-6771-4a9e-af29-ea8f14539721" containerName="collect-profiles" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.648300 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.672535 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pg2xr"] Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.722191 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-utilities\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.722258 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fncj8\" (UniqueName: \"kubernetes.io/projected/20800b48-5695-4514-bff1-0d355bfcc835-kube-api-access-fncj8\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.722326 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-catalog-content\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.827363 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-utilities\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.827933 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fncj8\" (UniqueName: \"kubernetes.io/projected/20800b48-5695-4514-bff1-0d355bfcc835-kube-api-access-fncj8\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.829573 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-catalog-content\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.829664 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-utilities\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.830485 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-catalog-content\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.855927 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fncj8\" (UniqueName: 
\"kubernetes.io/projected/20800b48-5695-4514-bff1-0d355bfcc835-kube-api-access-fncj8\") pod \"certified-operators-pg2xr\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:25 crc kubenswrapper[4809]: I0312 09:15:25.993586 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:26 crc kubenswrapper[4809]: I0312 09:15:26.638129 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pg2xr"] Mar 12 09:15:27 crc kubenswrapper[4809]: I0312 09:15:27.120289 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:15:27 crc kubenswrapper[4809]: E0312 09:15:27.120726 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:15:27 crc kubenswrapper[4809]: I0312 09:15:27.228052 4809 generic.go:334] "Generic (PLEG): container finished" podID="20800b48-5695-4514-bff1-0d355bfcc835" containerID="533e84213a3496590c1026b7848ce709d25dceb8f2d23bbe6c6ddd06a894fcbf" exitCode=0 Mar 12 09:15:27 crc kubenswrapper[4809]: I0312 09:15:27.228283 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg2xr" event={"ID":"20800b48-5695-4514-bff1-0d355bfcc835","Type":"ContainerDied","Data":"533e84213a3496590c1026b7848ce709d25dceb8f2d23bbe6c6ddd06a894fcbf"} Mar 12 09:15:27 crc kubenswrapper[4809]: I0312 09:15:27.228600 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-pg2xr" event={"ID":"20800b48-5695-4514-bff1-0d355bfcc835","Type":"ContainerStarted","Data":"b17ed8920badfa69df25caf842dee205512ed94e4dadefcea721ebe21005973d"} Mar 12 09:15:28 crc kubenswrapper[4809]: I0312 09:15:28.246311 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg2xr" event={"ID":"20800b48-5695-4514-bff1-0d355bfcc835","Type":"ContainerStarted","Data":"f5b332a1d139fbada07f634c4e1af5da961b1a2f8a08d300d90efb62628fa8a7"} Mar 12 09:15:30 crc kubenswrapper[4809]: I0312 09:15:30.271258 4809 generic.go:334] "Generic (PLEG): container finished" podID="20800b48-5695-4514-bff1-0d355bfcc835" containerID="f5b332a1d139fbada07f634c4e1af5da961b1a2f8a08d300d90efb62628fa8a7" exitCode=0 Mar 12 09:15:30 crc kubenswrapper[4809]: I0312 09:15:30.271334 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg2xr" event={"ID":"20800b48-5695-4514-bff1-0d355bfcc835","Type":"ContainerDied","Data":"f5b332a1d139fbada07f634c4e1af5da961b1a2f8a08d300d90efb62628fa8a7"} Mar 12 09:15:31 crc kubenswrapper[4809]: I0312 09:15:31.284865 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg2xr" event={"ID":"20800b48-5695-4514-bff1-0d355bfcc835","Type":"ContainerStarted","Data":"baf0e4085645a273430aa84d6c26fe19238f9052209d55c5132da55961dcace6"} Mar 12 09:15:31 crc kubenswrapper[4809]: I0312 09:15:31.307546 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pg2xr" podStartSLOduration=2.833165954 podStartE2EDuration="6.307523275s" podCreationTimestamp="2026-03-12 09:15:25 +0000 UTC" firstStartedPulling="2026-03-12 09:15:27.232945235 +0000 UTC m=+4600.814980988" lastFinishedPulling="2026-03-12 09:15:30.707302546 +0000 UTC m=+4604.289338309" observedRunningTime="2026-03-12 09:15:31.303259049 +0000 UTC m=+4604.885294792" 
watchObservedRunningTime="2026-03-12 09:15:31.307523275 +0000 UTC m=+4604.889559008" Mar 12 09:15:35 crc kubenswrapper[4809]: I0312 09:15:35.677336 4809 scope.go:117] "RemoveContainer" containerID="cad53017dac45a115ef0ab45c13a086ab88169d1bbc66965d3cbf6a4292a3437" Mar 12 09:15:35 crc kubenswrapper[4809]: I0312 09:15:35.994765 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:35 crc kubenswrapper[4809]: I0312 09:15:35.995540 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:36 crc kubenswrapper[4809]: I0312 09:15:36.307577 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:36 crc kubenswrapper[4809]: I0312 09:15:36.399011 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:36 crc kubenswrapper[4809]: I0312 09:15:36.571250 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pg2xr"] Mar 12 09:15:38 crc kubenswrapper[4809]: I0312 09:15:38.363976 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pg2xr" podUID="20800b48-5695-4514-bff1-0d355bfcc835" containerName="registry-server" containerID="cri-o://baf0e4085645a273430aa84d6c26fe19238f9052209d55c5132da55961dcace6" gracePeriod=2 Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.383476 4809 generic.go:334] "Generic (PLEG): container finished" podID="20800b48-5695-4514-bff1-0d355bfcc835" containerID="baf0e4085645a273430aa84d6c26fe19238f9052209d55c5132da55961dcace6" exitCode=0 Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.383628 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg2xr" 
event={"ID":"20800b48-5695-4514-bff1-0d355bfcc835","Type":"ContainerDied","Data":"baf0e4085645a273430aa84d6c26fe19238f9052209d55c5132da55961dcace6"} Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.535984 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.550088 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-utilities\") pod \"20800b48-5695-4514-bff1-0d355bfcc835\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.550206 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fncj8\" (UniqueName: \"kubernetes.io/projected/20800b48-5695-4514-bff1-0d355bfcc835-kube-api-access-fncj8\") pod \"20800b48-5695-4514-bff1-0d355bfcc835\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.550261 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-catalog-content\") pod \"20800b48-5695-4514-bff1-0d355bfcc835\" (UID: \"20800b48-5695-4514-bff1-0d355bfcc835\") " Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.553593 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-utilities" (OuterVolumeSpecName: "utilities") pod "20800b48-5695-4514-bff1-0d355bfcc835" (UID: "20800b48-5695-4514-bff1-0d355bfcc835"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.562607 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20800b48-5695-4514-bff1-0d355bfcc835-kube-api-access-fncj8" (OuterVolumeSpecName: "kube-api-access-fncj8") pod "20800b48-5695-4514-bff1-0d355bfcc835" (UID: "20800b48-5695-4514-bff1-0d355bfcc835"). InnerVolumeSpecName "kube-api-access-fncj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.653389 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.653428 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fncj8\" (UniqueName: \"kubernetes.io/projected/20800b48-5695-4514-bff1-0d355bfcc835-kube-api-access-fncj8\") on node \"crc\" DevicePath \"\"" Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.658975 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20800b48-5695-4514-bff1-0d355bfcc835" (UID: "20800b48-5695-4514-bff1-0d355bfcc835"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:15:39 crc kubenswrapper[4809]: I0312 09:15:39.823951 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20800b48-5695-4514-bff1-0d355bfcc835-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:15:40 crc kubenswrapper[4809]: I0312 09:15:40.106558 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:15:40 crc kubenswrapper[4809]: E0312 09:15:40.107052 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:15:40 crc kubenswrapper[4809]: I0312 09:15:40.414140 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pg2xr" event={"ID":"20800b48-5695-4514-bff1-0d355bfcc835","Type":"ContainerDied","Data":"b17ed8920badfa69df25caf842dee205512ed94e4dadefcea721ebe21005973d"} Mar 12 09:15:40 crc kubenswrapper[4809]: I0312 09:15:40.414517 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pg2xr" Mar 12 09:15:40 crc kubenswrapper[4809]: I0312 09:15:40.414863 4809 scope.go:117] "RemoveContainer" containerID="baf0e4085645a273430aa84d6c26fe19238f9052209d55c5132da55961dcace6" Mar 12 09:15:40 crc kubenswrapper[4809]: I0312 09:15:40.453412 4809 scope.go:117] "RemoveContainer" containerID="f5b332a1d139fbada07f634c4e1af5da961b1a2f8a08d300d90efb62628fa8a7" Mar 12 09:15:40 crc kubenswrapper[4809]: I0312 09:15:40.456536 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pg2xr"] Mar 12 09:15:40 crc kubenswrapper[4809]: I0312 09:15:40.471970 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pg2xr"] Mar 12 09:15:40 crc kubenswrapper[4809]: I0312 09:15:40.487006 4809 scope.go:117] "RemoveContainer" containerID="533e84213a3496590c1026b7848ce709d25dceb8f2d23bbe6c6ddd06a894fcbf" Mar 12 09:15:41 crc kubenswrapper[4809]: I0312 09:15:41.122305 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20800b48-5695-4514-bff1-0d355bfcc835" path="/var/lib/kubelet/pods/20800b48-5695-4514-bff1-0d355bfcc835/volumes" Mar 12 09:15:52 crc kubenswrapper[4809]: I0312 09:15:52.107097 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:15:53 crc kubenswrapper[4809]: I0312 09:15:53.584219 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"8d31c87fdce03fe5a0350b3981acfc6e7cf41f3cc5482b0734974401077eea98"} Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.156609 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555116-vnrs9"] Mar 12 09:16:00 crc kubenswrapper[4809]: E0312 09:16:00.157731 4809 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="20800b48-5695-4514-bff1-0d355bfcc835" containerName="extract-utilities" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.157748 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="20800b48-5695-4514-bff1-0d355bfcc835" containerName="extract-utilities" Mar 12 09:16:00 crc kubenswrapper[4809]: E0312 09:16:00.157769 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20800b48-5695-4514-bff1-0d355bfcc835" containerName="extract-content" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.157777 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="20800b48-5695-4514-bff1-0d355bfcc835" containerName="extract-content" Mar 12 09:16:00 crc kubenswrapper[4809]: E0312 09:16:00.157797 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20800b48-5695-4514-bff1-0d355bfcc835" containerName="registry-server" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.157806 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="20800b48-5695-4514-bff1-0d355bfcc835" containerName="registry-server" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.158157 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="20800b48-5695-4514-bff1-0d355bfcc835" containerName="registry-server" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.159095 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.168745 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555116-vnrs9"] Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.198147 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.198170 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.198479 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.270239 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fpwj\" (UniqueName: \"kubernetes.io/projected/1f7ed2c6-8773-41ec-aff7-35d3127dc86d-kube-api-access-7fpwj\") pod \"auto-csr-approver-29555116-vnrs9\" (UID: \"1f7ed2c6-8773-41ec-aff7-35d3127dc86d\") " pod="openshift-infra/auto-csr-approver-29555116-vnrs9" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.372463 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fpwj\" (UniqueName: \"kubernetes.io/projected/1f7ed2c6-8773-41ec-aff7-35d3127dc86d-kube-api-access-7fpwj\") pod \"auto-csr-approver-29555116-vnrs9\" (UID: \"1f7ed2c6-8773-41ec-aff7-35d3127dc86d\") " pod="openshift-infra/auto-csr-approver-29555116-vnrs9" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.396275 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fpwj\" (UniqueName: \"kubernetes.io/projected/1f7ed2c6-8773-41ec-aff7-35d3127dc86d-kube-api-access-7fpwj\") pod \"auto-csr-approver-29555116-vnrs9\" (UID: \"1f7ed2c6-8773-41ec-aff7-35d3127dc86d\") " 
pod="openshift-infra/auto-csr-approver-29555116-vnrs9" Mar 12 09:16:00 crc kubenswrapper[4809]: I0312 09:16:00.514148 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" Mar 12 09:16:01 crc kubenswrapper[4809]: I0312 09:16:01.017919 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555116-vnrs9"] Mar 12 09:16:01 crc kubenswrapper[4809]: I0312 09:16:01.700332 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" event={"ID":"1f7ed2c6-8773-41ec-aff7-35d3127dc86d","Type":"ContainerStarted","Data":"b04bc66dae2a23d15d3e49953154510fec8d702189e13a1a452d23e5a71fa530"} Mar 12 09:16:02 crc kubenswrapper[4809]: I0312 09:16:02.712852 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" event={"ID":"1f7ed2c6-8773-41ec-aff7-35d3127dc86d","Type":"ContainerStarted","Data":"ac864f3a8b8d53d727182fe38f205d55ba4da341ec2230a4b2d339751aa35dab"} Mar 12 09:16:02 crc kubenswrapper[4809]: I0312 09:16:02.738415 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" podStartSLOduration=1.922247617 podStartE2EDuration="2.738392308s" podCreationTimestamp="2026-03-12 09:16:00 +0000 UTC" firstStartedPulling="2026-03-12 09:16:01.022101553 +0000 UTC m=+4634.604137296" lastFinishedPulling="2026-03-12 09:16:01.838246244 +0000 UTC m=+4635.420281987" observedRunningTime="2026-03-12 09:16:02.733815433 +0000 UTC m=+4636.315851196" watchObservedRunningTime="2026-03-12 09:16:02.738392308 +0000 UTC m=+4636.320428041" Mar 12 09:16:04 crc kubenswrapper[4809]: I0312 09:16:04.735693 4809 generic.go:334] "Generic (PLEG): container finished" podID="1f7ed2c6-8773-41ec-aff7-35d3127dc86d" containerID="ac864f3a8b8d53d727182fe38f205d55ba4da341ec2230a4b2d339751aa35dab" exitCode=0 Mar 12 09:16:04 crc 
kubenswrapper[4809]: I0312 09:16:04.735776 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" event={"ID":"1f7ed2c6-8773-41ec-aff7-35d3127dc86d","Type":"ContainerDied","Data":"ac864f3a8b8d53d727182fe38f205d55ba4da341ec2230a4b2d339751aa35dab"} Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.534642 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.595137 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fpwj\" (UniqueName: \"kubernetes.io/projected/1f7ed2c6-8773-41ec-aff7-35d3127dc86d-kube-api-access-7fpwj\") pod \"1f7ed2c6-8773-41ec-aff7-35d3127dc86d\" (UID: \"1f7ed2c6-8773-41ec-aff7-35d3127dc86d\") " Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.696211 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7ed2c6-8773-41ec-aff7-35d3127dc86d-kube-api-access-7fpwj" (OuterVolumeSpecName: "kube-api-access-7fpwj") pod "1f7ed2c6-8773-41ec-aff7-35d3127dc86d" (UID: "1f7ed2c6-8773-41ec-aff7-35d3127dc86d"). InnerVolumeSpecName "kube-api-access-7fpwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.700650 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fpwj\" (UniqueName: \"kubernetes.io/projected/1f7ed2c6-8773-41ec-aff7-35d3127dc86d-kube-api-access-7fpwj\") on node \"crc\" DevicePath \"\"" Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.766203 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" event={"ID":"1f7ed2c6-8773-41ec-aff7-35d3127dc86d","Type":"ContainerDied","Data":"b04bc66dae2a23d15d3e49953154510fec8d702189e13a1a452d23e5a71fa530"} Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.766271 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b04bc66dae2a23d15d3e49953154510fec8d702189e13a1a452d23e5a71fa530" Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.766293 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555116-vnrs9" Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.837246 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555110-zclvc"] Mar 12 09:16:06 crc kubenswrapper[4809]: I0312 09:16:06.856656 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555110-zclvc"] Mar 12 09:16:07 crc kubenswrapper[4809]: I0312 09:16:07.128814 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c05d355-1d83-4903-a3ec-d2be1b2aef2b" path="/var/lib/kubelet/pods/9c05d355-1d83-4903-a3ec-d2be1b2aef2b/volumes" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.948835 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 09:16:10 crc kubenswrapper[4809]: E0312 09:16:10.950468 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1f7ed2c6-8773-41ec-aff7-35d3127dc86d" containerName="oc" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.950489 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7ed2c6-8773-41ec-aff7-35d3127dc86d" containerName="oc" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.950781 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7ed2c6-8773-41ec-aff7-35d3127dc86d" containerName="oc" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.952001 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.954466 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.955302 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.955359 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rzjwh" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.955425 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 12 09:16:10 crc kubenswrapper[4809]: I0312 09:16:10.965060 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.034429 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.034503 4809 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.034739 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfzf8\" (UniqueName: \"kubernetes.io/projected/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-kube-api-access-bfzf8\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.034893 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.035164 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.035244 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.035283 4809 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.035368 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-config-data\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.035515 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.140749 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.140811 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.140868 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bfzf8\" (UniqueName: \"kubernetes.io/projected/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-kube-api-access-bfzf8\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.140913 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.140975 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.141003 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.141021 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.141051 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-config-data\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.141128 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.141314 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.141633 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.142153 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.143388 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.144915 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.151614 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.151971 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.151952 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.168770 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfzf8\" (UniqueName: \"kubernetes.io/projected/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-kube-api-access-bfzf8\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc 
kubenswrapper[4809]: I0312 09:16:11.207662 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.276101 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 09:16:11 crc kubenswrapper[4809]: W0312 09:16:11.843362 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4b8d2be_fafc_4d7e_9348_053c53d3cb4d.slice/crio-fa4f835a8bf6c604809bcdd04d9466f13532f158c03007c0012cd7bc489ef801 WatchSource:0}: Error finding container fa4f835a8bf6c604809bcdd04d9466f13532f158c03007c0012cd7bc489ef801: Status 404 returned error can't find the container with id fa4f835a8bf6c604809bcdd04d9466f13532f158c03007c0012cd7bc489ef801 Mar 12 09:16:11 crc kubenswrapper[4809]: I0312 09:16:11.861281 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 09:16:12 crc kubenswrapper[4809]: I0312 09:16:12.849035 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d","Type":"ContainerStarted","Data":"fa4f835a8bf6c604809bcdd04d9466f13532f158c03007c0012cd7bc489ef801"} Mar 12 09:16:36 crc kubenswrapper[4809]: I0312 09:16:36.339101 4809 scope.go:117] "RemoveContainer" containerID="7fa6daeba317d318621aba1fe3d247a3caf19e7921ddc27583a167819e5af40a" Mar 12 09:16:47 crc kubenswrapper[4809]: E0312 09:16:47.761182 4809 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 12 09:16:47 crc kubenswrapper[4809]: 
E0312 09:16:47.765447 4809 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfzf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,Mount
Propagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d4b8d2be-fafc-4d7e-9348-053c53d3cb4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 09:16:47 crc kubenswrapper[4809]: E0312 09:16:47.766715 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" Mar 12 09:16:48 crc kubenswrapper[4809]: E0312 09:16:48.336748 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" 
pod="openstack/tempest-tests-tempest" podUID="d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" Mar 12 09:17:00 crc kubenswrapper[4809]: I0312 09:17:00.779388 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 12 09:17:03 crc kubenswrapper[4809]: I0312 09:17:03.514894 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d","Type":"ContainerStarted","Data":"397e2f02561b964b375eb590587fb3968f9e132cfa4c93400edbc56886647694"} Mar 12 09:17:03 crc kubenswrapper[4809]: I0312 09:17:03.538698 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.609971949 podStartE2EDuration="54.53867156s" podCreationTimestamp="2026-03-12 09:16:09 +0000 UTC" firstStartedPulling="2026-03-12 09:16:11.847019342 +0000 UTC m=+4645.429055075" lastFinishedPulling="2026-03-12 09:17:00.775718953 +0000 UTC m=+4694.357754686" observedRunningTime="2026-03-12 09:17:03.530872537 +0000 UTC m=+4697.112908300" watchObservedRunningTime="2026-03-12 09:17:03.53867156 +0000 UTC m=+4697.120707293" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.239255 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555118-h7hjw"] Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.256424 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.269724 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxq67\" (UniqueName: \"kubernetes.io/projected/95c097ab-15b8-4188-a5ee-bf0f310c4d50-kube-api-access-sxq67\") pod \"auto-csr-approver-29555118-h7hjw\" (UID: \"95c097ab-15b8-4188-a5ee-bf0f310c4d50\") " pod="openshift-infra/auto-csr-approver-29555118-h7hjw" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.271339 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.273168 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.273850 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.295164 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555118-h7hjw"] Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.372006 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxq67\" (UniqueName: \"kubernetes.io/projected/95c097ab-15b8-4188-a5ee-bf0f310c4d50-kube-api-access-sxq67\") pod \"auto-csr-approver-29555118-h7hjw\" (UID: \"95c097ab-15b8-4188-a5ee-bf0f310c4d50\") " pod="openshift-infra/auto-csr-approver-29555118-h7hjw" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.386879 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-46txv"] Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.391157 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.399196 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46txv"] Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.426149 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxq67\" (UniqueName: \"kubernetes.io/projected/95c097ab-15b8-4188-a5ee-bf0f310c4d50-kube-api-access-sxq67\") pod \"auto-csr-approver-29555118-h7hjw\" (UID: \"95c097ab-15b8-4188-a5ee-bf0f310c4d50\") " pod="openshift-infra/auto-csr-approver-29555118-h7hjw" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.475050 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-utilities\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.475173 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdh7p\" (UniqueName: \"kubernetes.io/projected/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-kube-api-access-tdh7p\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.475257 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-catalog-content\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.578493 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-utilities\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.578578 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdh7p\" (UniqueName: \"kubernetes.io/projected/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-kube-api-access-tdh7p\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.578677 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-catalog-content\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.580470 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-utilities\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.582748 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-catalog-content\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.617364 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.617909 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdh7p\" (UniqueName: \"kubernetes.io/projected/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-kube-api-access-tdh7p\") pod \"redhat-operators-46txv\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") " pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:00 crc kubenswrapper[4809]: I0312 09:18:00.719995 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:02 crc kubenswrapper[4809]: I0312 09:18:02.037593 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555118-h7hjw"] Mar 12 09:18:02 crc kubenswrapper[4809]: I0312 09:18:02.074192 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46txv"] Mar 12 09:18:02 crc kubenswrapper[4809]: I0312 09:18:02.313801 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" event={"ID":"95c097ab-15b8-4188-a5ee-bf0f310c4d50","Type":"ContainerStarted","Data":"6f0bb71b31628f372f878f414b85c649fee99f72a64c65bd3b03b89af8b3117e"} Mar 12 09:18:02 crc kubenswrapper[4809]: I0312 09:18:02.314854 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerStarted","Data":"7a8b9fdb8dd18dde852347db3dd330348b113389c3c370464698be77a868bafb"} Mar 12 09:18:03 crc kubenswrapper[4809]: I0312 09:18:03.328736 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerDied","Data":"630f38dc4a43b9e34c3ed2c530afdc8f3d32dc5b8a9b97865374fa7eecb84da9"} Mar 12 09:18:03 crc 
kubenswrapper[4809]: I0312 09:18:03.330056 4809 generic.go:334] "Generic (PLEG): container finished" podID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerID="630f38dc4a43b9e34c3ed2c530afdc8f3d32dc5b8a9b97865374fa7eecb84da9" exitCode=0 Mar 12 09:18:05 crc kubenswrapper[4809]: I0312 09:18:05.356015 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" event={"ID":"95c097ab-15b8-4188-a5ee-bf0f310c4d50","Type":"ContainerStarted","Data":"d3eb763285b96ac6f8961c5b135287314917683899aa30d29647d0b87c4b7b57"} Mar 12 09:18:05 crc kubenswrapper[4809]: I0312 09:18:05.382264 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" podStartSLOduration=3.638247489 podStartE2EDuration="5.382222631s" podCreationTimestamp="2026-03-12 09:18:00 +0000 UTC" firstStartedPulling="2026-03-12 09:18:02.090593736 +0000 UTC m=+4755.672629469" lastFinishedPulling="2026-03-12 09:18:03.834568878 +0000 UTC m=+4757.416604611" observedRunningTime="2026-03-12 09:18:05.379924819 +0000 UTC m=+4758.961960552" watchObservedRunningTime="2026-03-12 09:18:05.382222631 +0000 UTC m=+4758.964258364" Mar 12 09:18:06 crc kubenswrapper[4809]: I0312 09:18:06.372642 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerStarted","Data":"a921f29721fba5bf17ebb48892cd422ae94fdf32f90985132d70452042a7204c"} Mar 12 09:18:07 crc kubenswrapper[4809]: I0312 09:18:07.386574 4809 generic.go:334] "Generic (PLEG): container finished" podID="95c097ab-15b8-4188-a5ee-bf0f310c4d50" containerID="d3eb763285b96ac6f8961c5b135287314917683899aa30d29647d0b87c4b7b57" exitCode=0 Mar 12 09:18:07 crc kubenswrapper[4809]: I0312 09:18:07.386662 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" 
event={"ID":"95c097ab-15b8-4188-a5ee-bf0f310c4d50","Type":"ContainerDied","Data":"d3eb763285b96ac6f8961c5b135287314917683899aa30d29647d0b87c4b7b57"} Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.026748 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.080853 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxq67\" (UniqueName: \"kubernetes.io/projected/95c097ab-15b8-4188-a5ee-bf0f310c4d50-kube-api-access-sxq67\") pod \"95c097ab-15b8-4188-a5ee-bf0f310c4d50\" (UID: \"95c097ab-15b8-4188-a5ee-bf0f310c4d50\") " Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.099598 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c097ab-15b8-4188-a5ee-bf0f310c4d50-kube-api-access-sxq67" (OuterVolumeSpecName: "kube-api-access-sxq67") pod "95c097ab-15b8-4188-a5ee-bf0f310c4d50" (UID: "95c097ab-15b8-4188-a5ee-bf0f310c4d50"). InnerVolumeSpecName "kube-api-access-sxq67". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.195029 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxq67\" (UniqueName: \"kubernetes.io/projected/95c097ab-15b8-4188-a5ee-bf0f310c4d50-kube-api-access-sxq67\") on node \"crc\" DevicePath \"\"" Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.412489 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" event={"ID":"95c097ab-15b8-4188-a5ee-bf0f310c4d50","Type":"ContainerDied","Data":"6f0bb71b31628f372f878f414b85c649fee99f72a64c65bd3b03b89af8b3117e"} Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.413369 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555118-h7hjw" Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.413794 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f0bb71b31628f372f878f414b85c649fee99f72a64c65bd3b03b89af8b3117e" Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.530128 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555112-dpqqz"] Mar 12 09:18:09 crc kubenswrapper[4809]: I0312 09:18:09.549828 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555112-dpqqz"] Mar 12 09:18:11 crc kubenswrapper[4809]: I0312 09:18:11.152343 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e" path="/var/lib/kubelet/pods/7ce5c40b-74b3-44f5-a2aa-47a4d8d6de9e/volumes" Mar 12 09:18:13 crc kubenswrapper[4809]: I0312 09:18:13.462461 4809 generic.go:334] "Generic (PLEG): container finished" podID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerID="a921f29721fba5bf17ebb48892cd422ae94fdf32f90985132d70452042a7204c" exitCode=0 Mar 12 09:18:13 crc kubenswrapper[4809]: I0312 09:18:13.462562 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerDied","Data":"a921f29721fba5bf17ebb48892cd422ae94fdf32f90985132d70452042a7204c"} Mar 12 09:18:14 crc kubenswrapper[4809]: I0312 09:18:14.479389 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerStarted","Data":"022ddad6061f352965a5f12228036f080521580cb0df65a26d724712ed694648"} Mar 12 09:18:14 crc kubenswrapper[4809]: I0312 09:18:14.519205 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-46txv" 
podStartSLOduration=3.968219795 podStartE2EDuration="14.519183138s" podCreationTimestamp="2026-03-12 09:18:00 +0000 UTC" firstStartedPulling="2026-03-12 09:18:03.33186875 +0000 UTC m=+4756.913904473" lastFinishedPulling="2026-03-12 09:18:13.882832083 +0000 UTC m=+4767.464867816" observedRunningTime="2026-03-12 09:18:14.509483654 +0000 UTC m=+4768.091519417" watchObservedRunningTime="2026-03-12 09:18:14.519183138 +0000 UTC m=+4768.101218871" Mar 12 09:18:15 crc kubenswrapper[4809]: I0312 09:18:15.050160 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:18:15 crc kubenswrapper[4809]: I0312 09:18:15.050566 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:18:20 crc kubenswrapper[4809]: I0312 09:18:20.722374 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:20 crc kubenswrapper[4809]: I0312 09:18:20.723208 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:18:21 crc kubenswrapper[4809]: I0312 09:18:21.880376 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:18:21 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:18:21 crc kubenswrapper[4809]: > Mar 12 09:18:31 
crc kubenswrapper[4809]: I0312 09:18:31.872529 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:18:31 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:18:31 crc kubenswrapper[4809]: > Mar 12 09:18:41 crc kubenswrapper[4809]: I0312 09:18:41.799072 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:18:41 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:18:41 crc kubenswrapper[4809]: > Mar 12 09:18:45 crc kubenswrapper[4809]: I0312 09:18:45.048900 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:18:45 crc kubenswrapper[4809]: I0312 09:18:45.052981 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.024953 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dpfq8"] Mar 12 09:18:48 crc kubenswrapper[4809]: E0312 09:18:48.032053 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c097ab-15b8-4188-a5ee-bf0f310c4d50" containerName="oc" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.032498 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="95c097ab-15b8-4188-a5ee-bf0f310c4d50" containerName="oc" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.037242 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c097ab-15b8-4188-a5ee-bf0f310c4d50" containerName="oc" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.047657 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.162521 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-catalog-content\") pod \"redhat-marketplace-dpfq8\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.162595 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tccv6\" (UniqueName: \"kubernetes.io/projected/58f90f01-2a85-4755-a6ea-fef9035e8982-kube-api-access-tccv6\") pod \"redhat-marketplace-dpfq8\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.162947 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-utilities\") pod \"redhat-marketplace-dpfq8\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.267492 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-catalog-content\") pod 
\"redhat-marketplace-dpfq8\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.267561 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tccv6\" (UniqueName: \"kubernetes.io/projected/58f90f01-2a85-4755-a6ea-fef9035e8982-kube-api-access-tccv6\") pod \"redhat-marketplace-dpfq8\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.267719 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-utilities\") pod \"redhat-marketplace-dpfq8\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.283581 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-catalog-content\") pod \"redhat-marketplace-dpfq8\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.285321 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-utilities\") pod \"redhat-marketplace-dpfq8\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.322753 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tccv6\" (UniqueName: \"kubernetes.io/projected/58f90f01-2a85-4755-a6ea-fef9035e8982-kube-api-access-tccv6\") pod \"redhat-marketplace-dpfq8\" (UID: 
\"58f90f01-2a85-4755-a6ea-fef9035e8982\") " pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.403272 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpfq8"] Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.422075 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:48 crc kubenswrapper[4809]: I0312 09:18:48.516266 4809 scope.go:117] "RemoveContainer" containerID="fb5a5c283e9be3baa3c7ceea4a44954305f059b5e6cbfa0b2c34c8633feb4c6b" Mar 12 09:18:50 crc kubenswrapper[4809]: I0312 09:18:50.219946 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpfq8"] Mar 12 09:18:50 crc kubenswrapper[4809]: I0312 09:18:50.947584 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpfq8" event={"ID":"58f90f01-2a85-4755-a6ea-fef9035e8982","Type":"ContainerDied","Data":"ea2c3109ce72d9b310552a518a4d11ed1ffb1970f054e487fe17d1151103740b"} Mar 12 09:18:50 crc kubenswrapper[4809]: I0312 09:18:50.948235 4809 generic.go:334] "Generic (PLEG): container finished" podID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerID="ea2c3109ce72d9b310552a518a4d11ed1ffb1970f054e487fe17d1151103740b" exitCode=0 Mar 12 09:18:50 crc kubenswrapper[4809]: I0312 09:18:50.948337 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpfq8" event={"ID":"58f90f01-2a85-4755-a6ea-fef9035e8982","Type":"ContainerStarted","Data":"368a9e66a8a48ff8f44ee166cbdccf5b13cc5d176421d480acaa7d4acf1b0ba3"} Mar 12 09:18:50 crc kubenswrapper[4809]: E0312 09:18:50.978559 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f90f01_2a85_4755_a6ea_fef9035e8982.slice/crio-ea2c3109ce72d9b310552a518a4d11ed1ffb1970f054e487fe17d1151103740b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f90f01_2a85_4755_a6ea_fef9035e8982.slice/crio-conmon-ea2c3109ce72d9b310552a518a4d11ed1ffb1970f054e487fe17d1151103740b.scope\": RecentStats: unable to find data in memory cache]" Mar 12 09:18:52 crc kubenswrapper[4809]: I0312 09:18:52.102794 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:18:52 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:18:52 crc kubenswrapper[4809]: > Mar 12 09:18:52 crc kubenswrapper[4809]: I0312 09:18:52.987315 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpfq8" event={"ID":"58f90f01-2a85-4755-a6ea-fef9035e8982","Type":"ContainerStarted","Data":"a4a9a12036286fbfb0b54c7538dc53bdd4bde136374beef43d56fb0262152906"} Mar 12 09:18:56 crc kubenswrapper[4809]: I0312 09:18:56.042156 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpfq8" event={"ID":"58f90f01-2a85-4755-a6ea-fef9035e8982","Type":"ContainerDied","Data":"a4a9a12036286fbfb0b54c7538dc53bdd4bde136374beef43d56fb0262152906"} Mar 12 09:18:56 crc kubenswrapper[4809]: I0312 09:18:56.043209 4809 generic.go:334] "Generic (PLEG): container finished" podID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerID="a4a9a12036286fbfb0b54c7538dc53bdd4bde136374beef43d56fb0262152906" exitCode=0 Mar 12 09:18:57 crc kubenswrapper[4809]: I0312 09:18:57.058141 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpfq8" 
event={"ID":"58f90f01-2a85-4755-a6ea-fef9035e8982","Type":"ContainerStarted","Data":"95ce856174ba6755f10553d8bce99f04d6833ec4cd2b775ddc73afa17d61e2d6"} Mar 12 09:18:57 crc kubenswrapper[4809]: I0312 09:18:57.102853 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dpfq8" podStartSLOduration=4.491117761 podStartE2EDuration="10.100280859s" podCreationTimestamp="2026-03-12 09:18:47 +0000 UTC" firstStartedPulling="2026-03-12 09:18:50.953734386 +0000 UTC m=+4804.535770119" lastFinishedPulling="2026-03-12 09:18:56.562897484 +0000 UTC m=+4810.144933217" observedRunningTime="2026-03-12 09:18:57.084844687 +0000 UTC m=+4810.666880420" watchObservedRunningTime="2026-03-12 09:18:57.100280859 +0000 UTC m=+4810.682316592" Mar 12 09:18:58 crc kubenswrapper[4809]: I0312 09:18:58.426656 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:58 crc kubenswrapper[4809]: I0312 09:18:58.427191 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:18:59 crc kubenswrapper[4809]: I0312 09:18:59.801650 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" probeResult="failure" output=< Mar 12 09:18:59 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:18:59 crc kubenswrapper[4809]: > Mar 12 09:19:02 crc kubenswrapper[4809]: I0312 09:19:02.299427 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:02 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:02 crc kubenswrapper[4809]: > 
Mar 12 09:19:09 crc kubenswrapper[4809]: I0312 09:19:09.590808 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:09 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:09 crc kubenswrapper[4809]: > Mar 12 09:19:11 crc kubenswrapper[4809]: I0312 09:19:11.872751 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:11 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:11 crc kubenswrapper[4809]: > Mar 12 09:19:14 crc kubenswrapper[4809]: I0312 09:19:14.899542 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:14 crc kubenswrapper[4809]: I0312 09:19:14.899843 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:14 crc kubenswrapper[4809]: I0312 09:19:14.994710 4809 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-dppmm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:14 crc kubenswrapper[4809]: I0312 09:19:14.998329 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" 
podUID="1189f657-b031-4ece-859b-95d3eadd8221" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.049045 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.049139 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.052641 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.058836 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d31c87fdce03fe5a0350b3981acfc6e7cf41f3cc5482b0734974401077eea98"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.060556 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://8d31c87fdce03fe5a0350b3981acfc6e7cf41f3cc5482b0734974401077eea98" 
gracePeriod=600 Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.344160 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"8d31c87fdce03fe5a0350b3981acfc6e7cf41f3cc5482b0734974401077eea98"} Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.345302 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="8d31c87fdce03fe5a0350b3981acfc6e7cf41f3cc5482b0734974401077eea98" exitCode=0 Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.346895 4809 scope.go:117] "RemoveContainer" containerID="9f5b71a157ead4783e53a957b141b066128a6a4df018ea7f5f50f3c38116cbd8" Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.539563 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.540173 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:15 crc kubenswrapper[4809]: I0312 09:19:15.539621 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:15 crc 
kubenswrapper[4809]: I0312 09:19:15.540288 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:16 crc kubenswrapper[4809]: I0312 09:19:16.382605 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"} Mar 12 09:19:18 crc kubenswrapper[4809]: I0312 09:19:18.874333 4809 patch_prober.go:28] interesting pod/thanos-querier-5b6975bb77-fcddt container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:18 crc kubenswrapper[4809]: I0312 09:19:18.880026 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" podUID="8716fe6b-cd87-4777-8291-6078ce9929bc" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:19 crc kubenswrapper[4809]: I0312 09:19:19.010297 4809 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-pbfks container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:19 crc kubenswrapper[4809]: I0312 09:19:19.010407 4809 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" podUID="8ac4723a-9ff0-4186-8177-8a86f6db8b9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:19 crc kubenswrapper[4809]: I0312 09:19:19.291217 4809 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-zb8df container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:19 crc kubenswrapper[4809]: I0312 09:19:19.291616 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" podUID="c3630a5f-f4c4-42af-8335-60dbcbdb4961" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:19 crc kubenswrapper[4809]: I0312 09:19:19.336825 4809 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-6pp5s container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:19 crc kubenswrapper[4809]: I0312 09:19:19.336938 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" podUID="9fc47673-0fe3-49f6-a2bb-06845a2f3fc4" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 
09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.038768 4809 patch_prober.go:28] interesting pod/console-5b99b64b5d-4hf4l container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.143:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.039008 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5b99b64b5d-4hf4l" podUID="4210061b-64cb-414a-be09-bf56697ad409" containerName="console" probeResult="failure" output="Get \"https://10.217.0.143:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.592379 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" podUID="8684cb78-fad5-4998-a52f-ba39be875af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.592603 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podUID="ead62bdc-2a69-4b3a-a6c5-b60614a34263" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.592674 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podUID="ead62bdc-2a69-4b3a-a6c5-b60614a34263" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.674392 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" podUID="12b71885-6cb4-4888-9056-a39becec3670" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.674708 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" podUID="8684cb78-fad5-4998-a52f-ba39be875af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.680538 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" podUID="da29e412-21cc-4249-9791-55335156ff1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.680680 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" podUID="12b71885-6cb4-4888-9056-a39becec3670" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.750399 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lncvk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.750431 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lncvk container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.750491 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" podUID="49c3f940-85d8-49c5-a529-367c56018858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.750494 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" podUID="49c3f940-85d8-49c5-a529-367c56018858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.975308 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" podUID="ddab063f-ed2f-416c-8730-55de13229f58" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:20 crc kubenswrapper[4809]: I0312 09:19:20.975321 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" podUID="ddab063f-ed2f-416c-8730-55de13229f58" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.063535 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" podUID="a5138546-10af-4d98-96b5-b39dd71e9af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.063603 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" podUID="a5138546-10af-4d98-96b5-b39dd71e9af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.229485 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.229792 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.229483 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" podUID="f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.313352 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" podUID="e349e256-24bd-459e-b5d5-4bf9d85b2a5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.313399 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" podUID="f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355505 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" podUID="53dea28f-c986-4b4e-a4da-757b2bc9435e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355632 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 
09:19:21.355684 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355743 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355758 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355787 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355798 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355846 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355893 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.355904 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" podUID="e349e256-24bd-459e-b5d5-4bf9d85b2a5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.698497 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.698582 4809 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" podUID="60e08cbe-2284-4030-8073-892fd74bcdc6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.698834 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.781396 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" podUID="60e08cbe-2284-4030-8073-892fd74bcdc6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.781738 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" podUID="bd9084af-4a31-4802-b9b2-827b0ad53628" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.781894 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" podUID="bd9084af-4a31-4802-b9b2-827b0ad53628" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc 
kubenswrapper[4809]: I0312 09:19:21.885343 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podUID="4abc098b-51aa-4483-93e1-4880178f6167" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.885468 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podUID="4abc098b-51aa-4483-93e1-4880178f6167" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.899857 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-jgxtq" podUID="f89f6199-4afe-4ace-a7a9-2b8c91451d40" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 12 09:19:21 crc kubenswrapper[4809]: I0312 09:19:21.982445 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" podUID="d177b9be-4037-4f81-8227-9c4361eba85f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.059907 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.198:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.060036 4809 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.198:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.145504 4809 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-xxj6b container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.93:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.145577 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" podUID="293a6f5b-33ba-4398-a1c7-a5f97db11950" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.93:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.203543 4809 patch_prober.go:28] interesting pod/metrics-server-59b6cb496c-xqg5c container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.203627 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" podUID="f3699ab3-2222-401a-b14c-3fef168b6861" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.349209 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.349295 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.397339 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="b92111bc-ddbe-401a-83c3-2b0c1e805c6a" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.21:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.579556 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" podUID="6befee19-0c78-47ca-a608-be246c0d7bb5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.579593 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" podUID="6befee19-0c78-47ca-a608-be246c0d7bb5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.603250 4809 patch_prober.go:28] interesting pod/monitoring-plugin-56cf9d75b7-58kgc container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.603306 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" podUID="9b29409f-7b59-433f-9daf-1c9bd70ef6a8" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:22 crc kubenswrapper[4809]: I0312 09:19:22.985286 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" podUID="5097b432-e4b9-407e-97a3-3821992f9f91" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.42:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.086462 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" podUID="d42ca3a9-74a0-4e76-ac25-730f412c28de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.088769 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" 
output=< Mar 12 09:19:23 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:23 crc kubenswrapper[4809]: > Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.089467 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:23 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:23 crc kubenswrapper[4809]: > Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.205333 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-hrqzc" podUID="d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:23 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:23 crc kubenswrapper[4809]: > Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.205385 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-hrqzc" podUID="d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:23 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:23 crc kubenswrapper[4809]: > Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.276721 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-pd2vq" podUID="ec52f8eb-40dd-4475-9726-69b84829233d" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:23 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:23 crc kubenswrapper[4809]: > Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.282770 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pd2vq" 
podUID="ec52f8eb-40dd-4475-9726-69b84829233d" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:23 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:23 crc kubenswrapper[4809]: > Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.304302 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.304404 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.304443 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.304559 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" podUID="6b5e61e4-2d13-491e-be53-aed7ae027cb1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.99:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.305099 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" podUID="6b5e61e4-2d13-491e-be53-aed7ae027cb1" 
containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.99:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.386443 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-7jdts" podUID="b7ddc716-c09f-4923-8e70-f2251873aea9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.100:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.386741 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-7jdts" podUID="b7ddc716-c09f-4923-8e70-f2251873aea9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.100:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.919476 4809 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-wwrvc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.920446 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" podUID="8f1a2dab-e883-409f-ba21-a52ea0947c1b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.919534 4809 patch_prober.go:28] interesting pod/thanos-querier-5b6975bb77-fcddt container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get 
\"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.919492 4809 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-wwrvc container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.920580 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" podUID="8716fe6b-cd87-4777-8291-6078ce9929bc" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:23 crc kubenswrapper[4809]: I0312 09:19:23.920656 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" podUID="8f1a2dab-e883-409f-ba21-a52ea0947c1b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.097491 4809 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-gb2mk container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.15:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.097576 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" podUID="68cdd0d8-8927-4777-8067-995b7a404794" containerName="perses-operator" 
probeResult="failure" output="Get \"http://10.217.0.15:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.179371 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.179484 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.180415 4809 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-gb2mk container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.180484 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" podUID="68cdd0d8-8927-4777-8067-995b7a404794" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.180525 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.180582 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.297094 4809 patch_prober.go:28] interesting pod/oauth-openshift-7d9c768c99-69kh8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.69:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.297143 4809 patch_prober.go:28] interesting pod/oauth-openshift-7d9c768c99-69kh8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.69:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.297176 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" podUID="0a8a753b-e49b-4631-8630-ecc01634d644" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.69:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.297206 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" 
podUID="0a8a753b-e49b-4631-8630-ecc01634d644" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.69:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.794323 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.794387 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jt7s5" podUID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.794407 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.794499 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.794567 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/speaker-jt7s5" podUID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.794563 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.897307 4809 patch_prober.go:28] interesting pod/loki-operator-controller-manager-65bb5b59df-8jk5h container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.897371 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" podUID="861c2912-a932-4142-9b25-c7c0e1aaf062" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.897393 4809 patch_prober.go:28] interesting pod/loki-operator-controller-manager-65bb5b59df-8jk5h container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.897468 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" podUID="861c2912-a932-4142-9b25-c7c0e1aaf062" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.899825 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.899933 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.950126 4809 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-dppmm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:24 crc kubenswrapper[4809]: I0312 09:19:24.950236 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" podUID="1189f657-b031-4ece-859b-95d3eadd8221" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.048847 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-dhc65 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.048929 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" podUID="8232d992-4bfb-46ca-a440-647d8c006309" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.052385 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-fl8dg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.052445 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fl8dg" podUID="a752ff7b-9553-492d-83d0-42bb9ea5dfa9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.052462 4809 patch_prober.go:28] interesting pod/downloads-7954f5f757-fl8dg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.052504 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-dhc65 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc 
kubenswrapper[4809]: I0312 09:19:25.052521 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fl8dg" podUID="a752ff7b-9553-492d-83d0-42bb9ea5dfa9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.052524 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" podUID="8232d992-4bfb-46ca-a440-647d8c006309" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.104272 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-tbqw2 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.104332 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" podUID="7546fb46-f601-417f-ad26-69a4fb625fdc" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.199280 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.199367 4809 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.200032 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-tbqw2 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.200073 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" podUID="7546fb46-f601-417f-ad26-69a4fb625fdc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.200256 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.200324 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.530690 4809 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tjjbt 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.530768 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" podUID="0621879e-29ff-49a3-81fc-1bde4a2d22ae" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.530924 4809 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tjjbt container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.530951 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" podUID="0621879e-29ff-49a3-81fc-1bde4a2d22ae" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.540360 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc 
kubenswrapper[4809]: I0312 09:19:25.540432 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.540599 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.540625 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.898276 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="729d6f4c-335b-486c-bea9-812d4abfdfd9" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.898959 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="729d6f4c-335b-486c-bea9-812d4abfdfd9" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.902301 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/openstack-operator-index-fxhzz" podUID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.902625 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-fxhzz" podUID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 09:19:25 crc kubenswrapper[4809]: I0312 09:19:25.903096 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 12 09:19:26 crc kubenswrapper[4809]: I0312 09:19:26.897782 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-jgxtq" podUID="f89f6199-4afe-4ace-a7a9-2b8c91451d40" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 12 09:19:26 crc kubenswrapper[4809]: I0312 09:19:26.898096 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="97133eff-e7a3-42a1-833a-674e836f7be8" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 09:19:26 crc kubenswrapper[4809]: I0312 09:19:26.898132 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="97133eff-e7a3-42a1-833a-674e836f7be8" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 09:19:27 crc kubenswrapper[4809]: I0312 09:19:27.293940 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-5qhpm container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:27 crc kubenswrapper[4809]: I0312 09:19:27.294018 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-5qhpm container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:27 crc kubenswrapper[4809]: I0312 09:19:27.294676 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" podUID="2204a165-ee7b-4609-bed7-9683860bce5d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:27 crc kubenswrapper[4809]: I0312 09:19:27.294746 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" podUID="2204a165-ee7b-4609-bed7-9683860bce5d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:27 crc kubenswrapper[4809]: I0312 09:19:27.483738 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" podUID="2ef4d6d0-1c93-4f10-bd15-5de5ede76c62" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:28 crc kubenswrapper[4809]: I0312 09:19:28.090891 4809 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-marketplace/certified-operators-zqzjb" podUID="f9d9e6d3-d87f-485b-bb03-6ed4f067de44" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:28 crc kubenswrapper[4809]: timeout: health rpc did not complete within 1s Mar 12 09:19:28 crc kubenswrapper[4809]: > Mar 12 09:19:28 crc kubenswrapper[4809]: I0312 09:19:28.091302 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-q7hlx" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:28 crc kubenswrapper[4809]: timeout: health rpc did not complete within 1s Mar 12 09:19:28 crc kubenswrapper[4809]: > Mar 12 09:19:28 crc kubenswrapper[4809]: I0312 09:19:28.091360 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-zqzjb" podUID="f9d9e6d3-d87f-485b-bb03-6ed4f067de44" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:28 crc kubenswrapper[4809]: timeout: health rpc did not complete within 1s Mar 12 09:19:28 crc kubenswrapper[4809]: > Mar 12 09:19:28 crc kubenswrapper[4809]: I0312 09:19:28.091446 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-q7hlx" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:28 crc kubenswrapper[4809]: timeout: health rpc did not complete within 1s Mar 12 09:19:28 crc kubenswrapper[4809]: > Mar 12 09:19:28 crc kubenswrapper[4809]: I0312 09:19:28.873222 4809 patch_prober.go:28] interesting pod/thanos-querier-5b6975bb77-fcddt container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:28 crc kubenswrapper[4809]: I0312 09:19:28.873863 4809 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" podUID="8716fe6b-cd87-4777-8291-6078ce9929bc" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:28 crc kubenswrapper[4809]: I0312 09:19:28.873426 4809 patch_prober.go:28] interesting pod/thanos-querier-5b6975bb77-fcddt container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:28 crc kubenswrapper[4809]: I0312 09:19:28.874056 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" podUID="8716fe6b-cd87-4777-8291-6078ce9929bc" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:29 crc kubenswrapper[4809]: I0312 09:19:29.010362 4809 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-pbfks container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:29 crc kubenswrapper[4809]: I0312 09:19:29.010465 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" podUID="8ac4723a-9ff0-4186-8177-8a86f6db8b9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:29 crc 
kubenswrapper[4809]: I0312 09:19:29.290614 4809 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-zb8df container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:29 crc kubenswrapper[4809]: I0312 09:19:29.291142 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" podUID="c3630a5f-f4c4-42af-8335-60dbcbdb4961" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:29 crc kubenswrapper[4809]: I0312 09:19:29.336231 4809 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-6pp5s container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:29 crc kubenswrapper[4809]: I0312 09:19:29.336339 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" podUID="9fc47673-0fe3-49f6-a2bb-06845a2f3fc4" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:29 crc kubenswrapper[4809]: I0312 09:19:29.404073 4809 trace.go:236] Trace[1097490478]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-pfh6p" (12-Mar-2026 09:19:28.010) (total time: 1388ms): Mar 12 09:19:29 crc 
kubenswrapper[4809]: Trace[1097490478]: [1.388200763s] [1.388200763s] END Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.010280 4809 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-pbfks container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.010388 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" podUID="8ac4723a-9ff0-4186-8177-8a86f6db8b9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.049335 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-dhc65 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.049665 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" podUID="8232d992-4bfb-46ca-a440-647d8c006309" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.081355 4809 patch_prober.go:28] interesting pod/console-5b99b64b5d-4hf4l container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.143:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.081470 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5b99b64b5d-4hf4l" podUID="4210061b-64cb-414a-be09-bf56697ad409" containerName="console" probeResult="failure" output="Get \"https://10.217.0.143:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.102648 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-tbqw2 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.103192 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" podUID="7546fb46-f601-417f-ad26-69a4fb625fdc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.203985 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.204087 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="1ef2f625-286b-49c8-97d9-a98350cfea7b" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.241214 4809 patch_prober.go:28] interesting 
pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.241297 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ba5833e7-becf-412f-879b-6cab8777fb0b" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.290203 4809 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-zb8df container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.290254 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:30 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:30 crc kubenswrapper[4809]: > Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.290270 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" podUID="c3630a5f-f4c4-42af-8335-60dbcbdb4961" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.336273 4809 patch_prober.go:28] interesting 
pod/logging-loki-query-frontend-6d6859c548-6pp5s container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.336383 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" podUID="9fc47673-0fe3-49f6-a2bb-06845a2f3fc4" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.341631 4809 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": context deadline exceeded" start-of-body= Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.341726 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="bcc0a610-6eb0-4a5f-88d9-5d069f760c14" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": context deadline exceeded" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.526035 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podUID="ead62bdc-2a69-4b3a-a6c5-b60614a34263" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.526041 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" podUID="8684cb78-fad5-4998-a52f-ba39be875af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.576370 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" podUID="12b71885-6cb4-4888-9056-a39becec3670" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.629490 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" podUID="da29e412-21cc-4249-9791-55335156ff1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.882030 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" podUID="b7c605d7-46e5-4daa-beb3-4ef624bc0df9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.897831 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="97133eff-e7a3-42a1-833a-674e836f7be8" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.899092 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="97133eff-e7a3-42a1-833a-674e836f7be8" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 09:19:30 crc kubenswrapper[4809]: I0312 09:19:30.962307 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" podUID="ddab063f-ed2f-416c-8730-55de13229f58" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.021817 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" podUID="a5138546-10af-4d98-96b5-b39dd71e9af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.115356 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.115417 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" podUID="f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.226611 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" 
podUID="e349e256-24bd-459e-b5d5-4bf9d85b2a5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.309409 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" podUID="53dea28f-c986-4b4e-a4da-757b2bc9435e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.309999 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.310054 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" podUID="53dea28f-c986-4b4e-a4da-757b2bc9435e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.310026 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.311262 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.311304 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.310144 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.310170 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.311466 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.311603 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.622426 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.622426 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" podUID="60e08cbe-2284-4030-8073-892fd74bcdc6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.681577 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" podUID="bd9084af-4a31-4802-b9b2-827b0ad53628" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.839486 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podUID="4abc098b-51aa-4483-93e1-4880178f6167" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:31 crc kubenswrapper[4809]: I0312 09:19:31.902073 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.004369 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" podUID="d177b9be-4037-4f81-8227-9c4361eba85f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.208899 4809 patch_prober.go:28] interesting pod/metrics-server-59b6cb496c-xqg5c container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": context deadline exceeded" start-of-body= Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.208983 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" podUID="f3699ab3-2222-401a-b14c-3fef168b6861" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": context deadline exceeded" Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.208846 4809 patch_prober.go:28] interesting pod/metrics-server-59b6cb496c-xqg5c container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.209084 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-59b6cb496c-xqg5c" podUID="f3699ab3-2222-401a-b14c-3fef168b6861" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.348484 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.348611 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.397191 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="b92111bc-ddbe-401a-83c3-2b0c1e805c6a" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.21:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.580928 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" podUID="6befee19-0c78-47ca-a608-be246c0d7bb5" containerName="webhook-server" probeResult="failure" 
output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.580928 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" podUID="6befee19-0c78-47ca-a608-be246c0d7bb5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.602964 4809 patch_prober.go:28] interesting pod/monitoring-plugin-56cf9d75b7-58kgc container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:32 crc kubenswrapper[4809]: I0312 09:19:32.603016 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" podUID="9b29409f-7b59-433f-9daf-1c9bd70ef6a8" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.128342 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" podUID="d42ca3a9-74a0-4e76-ac25-730f412c28de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.128348 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" podUID="d42ca3a9-74a0-4e76-ac25-730f412c28de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.211479 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.335585 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" podUID="6b5e61e4-2d13-491e-be53-aed7ae027cb1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.99:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.335594 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.335885 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.335944 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" podUID="6b5e61e4-2d13-491e-be53-aed7ae027cb1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.99:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.419498 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-7jdts" podUID="b7ddc716-c09f-4923-8e70-f2251873aea9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.100:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.419721 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-7jdts" podUID="b7ddc716-c09f-4923-8e70-f2251873aea9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.100:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.917626 4809 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-wwrvc container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.917929 4809 patch_prober.go:28] interesting pod/thanos-querier-5b6975bb77-fcddt container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.918423 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" podUID="8f1a2dab-e883-409f-ba21-a52ea0947c1b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.918513 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" podUID="8716fe6b-cd87-4777-8291-6078ce9929bc" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.917976 4809 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-wwrvc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:33 crc kubenswrapper[4809]: I0312 09:19:33.918595 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-wwrvc" podUID="8f1a2dab-e883-409f-ba21-a52ea0947c1b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.056407 4809 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-gb2mk container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.056492 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/perses-operator-5bf474d74f-gb2mk" podUID="68cdd0d8-8927-4777-8067-995b7a404794" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.15:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.084482 4809 trace.go:236] Trace[59500036]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (12-Mar-2026 09:19:31.415) (total time: 2665ms): Mar 12 09:19:34 crc kubenswrapper[4809]: Trace[59500036]: [2.665331083s] [2.665331083s] END Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.296602 4809 patch_prober.go:28] interesting pod/oauth-openshift-7d9c768c99-69kh8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.69:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.296671 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" podUID="0a8a753b-e49b-4631-8630-ecc01634d644" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.69:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.296744 4809 patch_prober.go:28] interesting pod/oauth-openshift-7d9c768c99-69kh8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.69:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.296760 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-authentication/oauth-openshift-7d9c768c99-69kh8" podUID="0a8a753b-e49b-4631-8630-ecc01634d644" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.69:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.795318 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jt7s5" podUID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.795354 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.795424 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-jt7s5" podUID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.795460 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.795304 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd 
container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.795570 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.855385 4809 patch_prober.go:28] interesting pod/loki-operator-controller-manager-65bb5b59df-8jk5h container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.855473 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-65bb5b59df-8jk5h" podUID="861c2912-a932-4142-9b25-c7c0e1aaf062" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.914096 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.914259 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" 
containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.916779 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.916844 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.941553 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.953583 4809 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-dppmm container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.953687 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" podUID="1189f657-b031-4ece-859b-95d3eadd8221" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.953745 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.958866 4809 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"2fa15bd351bb201693c81881d012f11f8d49df826a5bf0e3f9c7ab0304d163f4"} pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 12 09:19:34 crc kubenswrapper[4809]: I0312 09:19:34.959024 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" podUID="1189f657-b031-4ece-859b-95d3eadd8221" containerName="authentication-operator" containerID="cri-o://2fa15bd351bb201693c81881d012f11f8d49df826a5bf0e3f9c7ab0304d163f4" gracePeriod=30 Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.048728 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-dhc65 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.048809 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" podUID="8232d992-4bfb-46ca-a440-647d8c006309" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.102190 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-tbqw2 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.102257 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" podUID="7546fb46-f601-417f-ad26-69a4fb625fdc" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.159365 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.159410 4809 patch_prober.go:28] interesting pod/router-default-5444994796-ttmrw container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.159440 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.159494 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-ttmrw" podUID="b24173e0-5140-4c4d-ab8b-ea3696db0b74" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.393442 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-pd2vq" podUID="ec52f8eb-40dd-4475-9726-69b84829233d" 
containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:35 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:35 crc kubenswrapper[4809]: > Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.393432 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-hrqzc" podUID="d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:35 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:35 crc kubenswrapper[4809]: > Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.397062 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-hrqzc" podUID="d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:35 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:35 crc kubenswrapper[4809]: > Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.432539 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:35 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:35 crc kubenswrapper[4809]: > Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.433452 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pd2vq" podUID="ec52f8eb-40dd-4475-9726-69b84829233d" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:35 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:35 crc kubenswrapper[4809]: > Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.460346 4809 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-vqtvl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.460453 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" podUID="50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.460351 4809 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vqtvl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.466226 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vqtvl" podUID="50dc6c2c-21cc-4fe3-83c6-9b2a69eb039d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.533772 4809 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tjjbt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.533817 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" podUID="0621879e-29ff-49a3-81fc-1bde4a2d22ae" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.537482 4809 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tjjbt container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.537541 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjjbt" podUID="0621879e-29ff-49a3-81fc-1bde4a2d22ae" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.540443 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.540551 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.540468 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.541272 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.546259 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.546320 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.550646 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"6dd326b9a3bcd0f8b552830e0fa15fc8392f2f0fb450c2e7b71b827759bce6a5"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" containerMessage="Container packageserver failed liveness probe, will be restarted" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.550735 4809 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" containerID="cri-o://6dd326b9a3bcd0f8b552830e0fa15fc8392f2f0fb450c2e7b71b827759bce6a5" gracePeriod=30 Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.594545 4809 trace.go:236] Trace[2139793361]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-pd2vq" (12-Mar-2026 09:19:34.401) (total time: 1192ms): Mar 12 09:19:35 crc kubenswrapper[4809]: Trace[2139793361]: [1.192954674s] [1.192954674s] END Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.790301 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-zqzjb" podUID="f9d9e6d3-d87f-485b-bb03-6ed4f067de44" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:35 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:35 crc kubenswrapper[4809]: > Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.800747 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-zqzjb" podUID="f9d9e6d3-d87f-485b-bb03-6ed4f067de44" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:35 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:35 crc kubenswrapper[4809]: > Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.898793 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="729d6f4c-335b-486c-bea9-812d4abfdfd9" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.898825 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="97133eff-e7a3-42a1-833a-674e836f7be8" containerName="prometheus" probeResult="failure" output="command timed out" 
Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.898958 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="729d6f4c-335b-486c-bea9-812d4abfdfd9" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.899280 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.907626 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="97133eff-e7a3-42a1-833a-674e836f7be8" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.908056 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.909476 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-fxhzz" podUID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 09:19:35 crc kubenswrapper[4809]: I0312 09:19:35.911502 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-fxhzz" podUID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.060764 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.060836 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.060847 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.060924 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.329646 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" podUID="9da05ba1-fc66-48d8-a8ce-c99c04f0e416" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.370321 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/infra-operator-controller-manager-5995f4446f-mz9kq" podUID="9da05ba1-fc66-48d8-a8ce-c99c04f0e416" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.550100 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.550217 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:36 crc kubenswrapper[4809]: I0312 09:19:36.899006 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-jgxtq" podUID="f89f6199-4afe-4ace-a7a9-2b8c91451d40" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.022200 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.022548 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.150549 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-q7hlx" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:37 crc kubenswrapper[4809]: timeout: health rpc did not complete within 1s Mar 12 09:19:37 crc kubenswrapper[4809]: > Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.150923 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-q7hlx" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:37 crc kubenswrapper[4809]: timeout: health rpc did not complete within 1s Mar 12 09:19:37 crc kubenswrapper[4809]: > Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.295205 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-5qhpm container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.295259 4809 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-5qhpm container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.295280 4809 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" podUID="2204a165-ee7b-4609-bed7-9683860bce5d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.295332 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-5qhpm" podUID="2204a165-ee7b-4609-bed7-9683860bce5d" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.79:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.523424 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" podUID="2ef4d6d0-1c93-4f10-bd15-5de5ede76c62" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.523431 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5d5444f5b-xmqds" podUID="2ef4d6d0-1c93-4f10-bd15-5de5ede76c62" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.773791 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.902335 4809 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.902442 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.941016 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"b96fb9463b48b47c51299bf419244fbc30fcc5ab57bebf0d2a86617d3123c3f7"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Mar 12 09:19:37 crc kubenswrapper[4809]: I0312 09:19:37.941141 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" containerID="cri-o://b96fb9463b48b47c51299bf419244fbc30fcc5ab57bebf0d2a86617d3123c3f7" gracePeriod=30 Mar 12 09:19:38 crc kubenswrapper[4809]: I0312 09:19:38.025332 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" podUID="5097b432-e4b9-407e-97a3-3821992f9f91" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.42:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:38 crc kubenswrapper[4809]: I0312 09:19:38.025433 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-4jx86" podUID="5097b432-e4b9-407e-97a3-3821992f9f91" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.42:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:38 crc kubenswrapper[4809]: I0312 09:19:38.873573 4809 
patch_prober.go:28] interesting pod/thanos-querier-5b6975bb77-fcddt container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:38 crc kubenswrapper[4809]: I0312 09:19:38.874195 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5b6975bb77-fcddt" podUID="8716fe6b-cd87-4777-8291-6078ce9929bc" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.009586 4809 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-pbfks container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.009695 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" podUID="8ac4723a-9ff0-4186-8177-8a86f6db8b9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.009803 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.030566 4809 trace.go:236] Trace[786201286]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (12-Mar-2026 09:19:35.300) (total time: 3730ms): Mar 12 
09:19:39 crc kubenswrapper[4809]: Trace[786201286]: [3.730273655s] [3.730273655s] END Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.143307 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.143370 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.143640 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.143656 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.290517 4809 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-zb8df container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.290590 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" podUID="c3630a5f-f4c4-42af-8335-60dbcbdb4961" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.290704 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.336733 4809 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-6pp5s container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.336817 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" podUID="9fc47673-0fe3-49f6-a2bb-06845a2f3fc4" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.336917 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.712923 4809 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" event={"ID":"ac7cc05e-989a-4474-9685-9600e3502dfd","Type":"ContainerDied","Data":"6dd326b9a3bcd0f8b552830e0fa15fc8392f2f0fb450c2e7b71b827759bce6a5"} Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.713283 4809 generic.go:334] "Generic (PLEG): container finished" podID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerID="6dd326b9a3bcd0f8b552830e0fa15fc8392f2f0fb450c2e7b71b827759bce6a5" exitCode=0 Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.715597 4809 generic.go:334] "Generic (PLEG): container finished" podID="1189f657-b031-4ece-859b-95d3eadd8221" containerID="2fa15bd351bb201693c81881d012f11f8d49df826a5bf0e3f9c7ab0304d163f4" exitCode=0 Mar 12 09:19:39 crc kubenswrapper[4809]: I0312 09:19:39.715625 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" event={"ID":"1189f657-b031-4ece-859b-95d3eadd8221","Type":"ContainerDied","Data":"2fa15bd351bb201693c81881d012f11f8d49df826a5bf0e3f9c7ab0304d163f4"} Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.010838 4809 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-pbfks container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.010897 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" podUID="8ac4723a-9ff0-4186-8177-8a86f6db8b9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.038131 4809 patch_prober.go:28] interesting 
pod/console-5b99b64b5d-4hf4l container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.143:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.038195 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5b99b64b5d-4hf4l" podUID="4210061b-64cb-414a-be09-bf56697ad409" containerName="console" probeResult="failure" output="Get \"https://10.217.0.143:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.038362 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.048677 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-dhc65 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.048903 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" podUID="8232d992-4bfb-46ca-a440-647d8c006309" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.102800 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-tbqw2 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.102859 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" podUID="7546fb46-f601-417f-ad26-69a4fb625fdc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.215919 4809 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.215978 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="1ef2f625-286b-49c8-97d9-a98350cfea7b" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.241138 4809 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.241334 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ba5833e7-becf-412f-879b-6cab8777fb0b" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 
09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.291394 4809 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-zb8df container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.291500 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" podUID="c3630a5f-f4c4-42af-8335-60dbcbdb4961" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.337936 4809 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-6pp5s container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.338029 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" podUID="9fc47673-0fe3-49f6-a2bb-06845a2f3fc4" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.343178 4809 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc 
kubenswrapper[4809]: I0312 09:19:40.343229 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="bcc0a610-6eb0-4a5f-88d9-5d069f760c14" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.468353 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podUID="ead62bdc-2a69-4b3a-a6c5-b60614a34263" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.591481 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" podUID="8684cb78-fad5-4998-a52f-ba39be875af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.591512 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podUID="ead62bdc-2a69-4b3a-a6c5-b60614a34263" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.591613 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.591661 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.674703 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" podUID="12b71885-6cb4-4888-9056-a39becec3670" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.675621 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" podUID="8684cb78-fad5-4998-a52f-ba39be875af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.760551 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" podUID="12b71885-6cb4-4888-9056-a39becec3670" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.760642 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" podUID="da29e412-21cc-4249-9791-55335156ff1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.760694 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 
09:19:40.802528 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-cvm8g" podUID="da29e412-21cc-4249-9791-55335156ff1b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.928314 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lncvk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.928712 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" podUID="49c3f940-85d8-49c5-a529-367c56018858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.928824 4809 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lncvk container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 09:19:40.928881 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-lncvk" podUID="49c3f940-85d8-49c5-a529-367c56018858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:40 crc kubenswrapper[4809]: I0312 
09:19:40.928935 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" podUID="87b1729d-5a9d-4e35-bec1-21d7307020f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.011382 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-mnhzr" podUID="87b1729d-5a9d-4e35-bec1-21d7307020f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.013080 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" podUID="b7c605d7-46e5-4daa-beb3-4ef624bc0df9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.093377 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" podUID="ddab063f-ed2f-416c-8730-55de13229f58" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.093490 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.094331 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/heat-operator-controller-manager-77b6666d85-7c2ts" podUID="b7c605d7-46e5-4daa-beb3-4ef624bc0df9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.094524 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-7dq74" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.178451 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" podUID="a5138546-10af-4d98-96b5-b39dd71e9af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.345276 4809 patch_prober.go:28] interesting pod/console-5b99b64b5d-4hf4l container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.143:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.345346 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5b99b64b5d-4hf4l" podUID="4210061b-64cb-414a-be09-bf56697ad409" containerName="console" probeResult="failure" output="Get \"https://10.217.0.143:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.345364 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" podUID="f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.345365 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" podUID="ddab063f-ed2f-416c-8730-55de13229f58" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.345892 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" podUID="a5138546-10af-4d98-96b5-b39dd71e9af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.346001 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.471381 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" podUID="53dea28f-c986-4b4e-a4da-757b2bc9435e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.471505 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.471784 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.471903 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.471932 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.471959 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.471984 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472022 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" podUID="b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472060 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472088 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472131 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qtwhh" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472216 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472291 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472276 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472328 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472394 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472728 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" podUID="f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.472901 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.473275 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" podUID="e349e256-24bd-459e-b5d5-4bf9d85b2a5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.473310 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" podUID="e349e256-24bd-459e-b5d5-4bf9d85b2a5d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.473379 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.475287 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-tlxd5" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.483229 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"3c24f46e748734abcd75c20587fa57ae57554aacfa8149267e6e1e842dc3973a"} pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.483747 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" containerID="cri-o://3c24f46e748734abcd75c20587fa57ae57554aacfa8149267e6e1e842dc3973a" gracePeriod=30 Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.483736 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" 
containerStatusID={"Type":"cri-o","ID":"84f1f44cea0ba1bbe7a81e9f212f7ffa115592893287b9b94fdcb98a1c713a9e"} pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" containerMessage="Container controller-manager failed liveness probe, will be restarted" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.483850 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" containerID="cri-o://84f1f44cea0ba1bbe7a81e9f212f7ffa115592893287b9b94fdcb98a1c713a9e" gracePeriod=30 Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.711365 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.711393 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" podUID="60e08cbe-2284-4030-8073-892fd74bcdc6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.711483 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.711524 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.711612 4809 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-s5r4z" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.752473 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" podUID="ead62bdc-2a69-4b3a-a6c5-b60614a34263" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.752955 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" podUID="60e08cbe-2284-4030-8073-892fd74bcdc6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.837410 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" podUID="bd9084af-4a31-4802-b9b2-827b0ad53628" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.837497 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.837874 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" 
podUID="bd9084af-4a31-4802-b9b2-827b0ad53628" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.837948 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.920376 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podUID="4abc098b-51aa-4483-93e1-4880178f6167" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.920771 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" podUID="4abc098b-51aa-4483-93e1-4880178f6167" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.920861 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.921645 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-p24mv" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.926546 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-svzr9" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.981393 4809 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" podUID="d177b9be-4037-4f81-8227-9c4361eba85f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:41 crc kubenswrapper[4809]: I0312 09:19:41.981503 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.059531 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.198:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.062150 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.062204 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.062264 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Liveness probe 
status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.062381 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.062315 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.198:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.062301 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.062519 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.082963 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"7846a855e1dfdb16db66ee630e5c34cfab65004b2e4ba492c88b071a31defb59"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.083048 4809 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" containerID="cri-o://7846a855e1dfdb16db66ee630e5c34cfab65004b2e4ba492c88b071a31defb59" gracePeriod=30 Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.092067 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-kwt4n" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.146323 4809 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-xxj6b container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.93:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.146414 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xxj6b" podUID="293a6f5b-33ba-4398-a1c7-a5f97db11950" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.93:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.168084 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.388320 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" podUID="a5138546-10af-4d98-96b5-b39dd71e9af1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 
09:19:42.388320 4809 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.388456 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.388532 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.388566 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-nwrl4" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.396520 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="b92111bc-ddbe-401a-83c3-2b0c1e805c6a" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.21:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.396522 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="b92111bc-ddbe-401a-83c3-2b0c1e805c6a" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.21:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.396596 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.409820 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"1f3af126e12fec7d1cf30bc40044b17f47526d6cd60671e66d8db4099042c526"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.409884 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b92111bc-ddbe-401a-83c3-2b0c1e805c6a" containerName="kube-state-metrics" containerID="cri-o://1f3af126e12fec7d1cf30bc40044b17f47526d6cd60671e66d8db4099042c526" gracePeriod=30 Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.514461 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" podUID="53dea28f-c986-4b4e-a4da-757b2bc9435e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.596417 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" podUID="6befee19-0c78-47ca-a608-be246c0d7bb5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.596493 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" Mar 
12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.596866 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" podUID="6befee19-0c78-47ca-a608-be246c0d7bb5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.596976 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.597324 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-rw46j" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.597478 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"026ed91a0e60127784c5d684fb23c4c3092787e276daee53bf7b01c4da6cd7bb"} pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" containerMessage="Container webhook-server failed liveness probe, will be restarted" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.597514 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" podUID="6befee19-0c78-47ca-a608-be246c0d7bb5" containerName="webhook-server" containerID="cri-o://026ed91a0e60127784c5d684fb23c4c3092787e276daee53bf7b01c4da6cd7bb" gracePeriod=2 Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.603700 4809 patch_prober.go:28] interesting pod/monitoring-plugin-56cf9d75b7-58kgc container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.603747 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" podUID="9b29409f-7b59-433f-9daf-1c9bd70ef6a8" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.603819 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.747259 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:42 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:42 crc kubenswrapper[4809]: > Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.747283 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:42 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:42 crc kubenswrapper[4809]: > Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.752295 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" podUID="b40480af-2b15-4c8f-9bf2-f63ca0dd6870" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:42 crc kubenswrapper[4809]: I0312 09:19:42.864077 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-648f7d48f7-wdwdw" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.108396 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" podUID="d42ca3a9-74a0-4e76-ac25-730f412c28de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.137202 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.187027 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.248565 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.317283 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" podUID="6b5e61e4-2d13-491e-be53-aed7ae027cb1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.99:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.317373 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.317391 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="controller" 
probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.317458 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.317491 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.317516 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" podUID="6b5e61e4-2d13-491e-be53-aed7ae027cb1" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.99:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.317972 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-n22vs" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.318041 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-n22vs" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.318099 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-n22vs" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.318162 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.319994 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"1204925015f75d92dcf2b3a5d0b3c76bf3d729fe9d0def579db6e8215475ff99"} pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.320044 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" podUID="6b5e61e4-2d13-491e-be53-aed7ae027cb1" containerName="frr-k8s-webhook-server" containerID="cri-o://1204925015f75d92dcf2b3a5d0b3c76bf3d729fe9d0def579db6e8215475ff99" gracePeriod=10 Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.322378 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"2e150576a9ab7f348d2f48751641b97f18a3b8a403d5f71d7c95cc18cad52a1f"} pod="metallb-system/frr-k8s-n22vs" containerMessage="Container controller failed liveness probe, will be restarted" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.322446 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"b8dbac94cf0906cf8b0e93d8247b8a9d204f6e8dca9a7504899c462a4a4937c2"} pod="metallb-system/frr-k8s-n22vs" containerMessage="Container frr failed liveness probe, will be restarted" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.322600 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="controller" containerID="cri-o://2e150576a9ab7f348d2f48751641b97f18a3b8a403d5f71d7c95cc18cad52a1f" gracePeriod=2 Mar 12 09:19:43 crc 
kubenswrapper[4809]: I0312 09:19:43.399398 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-7jdts" podUID="b7ddc716-c09f-4923-8e70-f2251873aea9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.100:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.399578 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-7jdts" podUID="b7ddc716-c09f-4923-8e70-f2251873aea9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.100:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.399615 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-7jdts" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.399711 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-7bb4cc7c98-7jdts" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.406337 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"874336bec80c8d58a380231463cdb488cfe96d70677ac37ad84c9f8a1d0e03f0"} pod="metallb-system/controller-7bb4cc7c98-7jdts" containerMessage="Container controller failed liveness probe, will be restarted" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.406405 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-7bb4cc7c98-7jdts" podUID="b7ddc716-c09f-4923-8e70-f2251873aea9" containerName="controller" containerID="cri-o://874336bec80c8d58a380231463cdb488cfe96d70677ac37ad84c9f8a1d0e03f0" gracePeriod=2 Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.486491 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/frr-k8s-n22vs" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.604868 4809 patch_prober.go:28] interesting pod/monitoring-plugin-56cf9d75b7-58kgc container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.605056 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" podUID="9b29409f-7b59-433f-9daf-1c9bd70ef6a8" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.702162 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-7jdts" podUID="b7ddc716-c09f-4923-8e70-f2251873aea9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.100:29150/metrics\": EOF" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.845896 4809 generic.go:334] "Generic (PLEG): container finished" podID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerID="3c24f46e748734abcd75c20587fa57ae57554aacfa8149267e6e1e842dc3973a" exitCode=0 Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.846013 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" event={"ID":"5ba37d8e-9139-402a-9909-8a9c3fa4d103","Type":"ContainerDied","Data":"3c24f46e748734abcd75c20587fa57ae57554aacfa8149267e6e1e842dc3973a"} Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.849236 4809 generic.go:334] "Generic (PLEG): container finished" podID="b7ddc716-c09f-4923-8e70-f2251873aea9" 
containerID="874336bec80c8d58a380231463cdb488cfe96d70677ac37ad84c9f8a1d0e03f0" exitCode=0 Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.849344 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-7jdts" event={"ID":"b7ddc716-c09f-4923-8e70-f2251873aea9","Type":"ContainerDied","Data":"874336bec80c8d58a380231463cdb488cfe96d70677ac37ad84c9f8a1d0e03f0"} Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.852478 4809 generic.go:334] "Generic (PLEG): container finished" podID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerID="84f1f44cea0ba1bbe7a81e9f212f7ffa115592893287b9b94fdcb98a1c713a9e" exitCode=0 Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.852585 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" event={"ID":"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b","Type":"ContainerDied","Data":"84f1f44cea0ba1bbe7a81e9f212f7ffa115592893287b9b94fdcb98a1c713a9e"} Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.855843 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" event={"ID":"ac7cc05e-989a-4474-9685-9600e3502dfd","Type":"ContainerStarted","Data":"7d7ba16082731bc86a268ffa27123ddec4bb834c33f8414351a03b33ea8a11b5"} Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.857024 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.857942 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.857998 4809 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.871556 4809 generic.go:334] "Generic (PLEG): container finished" podID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerID="7846a855e1dfdb16db66ee630e5c34cfab65004b2e4ba492c88b071a31defb59" exitCode=0 Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.871710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" event={"ID":"63625da7-0f0d-48f1-8b58-c75e04bc31e4","Type":"ContainerDied","Data":"7846a855e1dfdb16db66ee630e5c34cfab65004b2e4ba492c88b071a31defb59"} Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.876049 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dppmm" event={"ID":"1189f657-b031-4ece-859b-95d3eadd8221","Type":"ContainerStarted","Data":"8b92468fd9f673ab09dee9f4517e9fa05a69771337fcd7f7dd1e10c4c962cde6"} Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.878731 4809 generic.go:334] "Generic (PLEG): container finished" podID="b92111bc-ddbe-401a-83c3-2b0c1e805c6a" containerID="1f3af126e12fec7d1cf30bc40044b17f47526d6cd60671e66d8db4099042c526" exitCode=2 Mar 12 09:19:43 crc kubenswrapper[4809]: I0312 09:19:43.878775 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b92111bc-ddbe-401a-83c3-2b0c1e805c6a","Type":"ContainerDied","Data":"1f3af126e12fec7d1cf30bc40044b17f47526d6cd60671e66d8db4099042c526"} Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.062852 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.063197 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.179953 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" podUID="d42ca3a9-74a0-4e76-ac25-730f412c28de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.539396 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.539831 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.540517 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness 
probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.540642 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880343 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880389 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880490 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880610 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jt7s5" podUID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880626 4809 prober.go:107] "Probe failed" 
probeType="Liveness" pod="metallb-system/speaker-jt7s5" podUID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880732 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880758 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jt7s5" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880761 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880788 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-jt7s5" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.880801 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.882067 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"ade5c69ad1337d150741ab5e04d79cdca3f83cf4a3e5e0abf35fb6bf1a1930a9"} pod="metallb-system/speaker-jt7s5" containerMessage="Container speaker failed liveness probe, will be restarted" Mar 12 09:19:44 
crc kubenswrapper[4809]: I0312 09:19:44.882165 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-jt7s5" podUID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerName="speaker" containerID="cri-o://ade5c69ad1337d150741ab5e04d79cdca3f83cf4a3e5e0abf35fb6bf1a1930a9" gracePeriod=2 Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.897746 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" probeResult="failure" output="command timed out" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.937003 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"1687b7a0dc62ca787c34748f8e774d1398baeeb28e8ceb14079cdf2e8b998238"} pod="openshift-console-operator/console-operator-58897d9998-6vssd" containerMessage="Container console-operator failed liveness probe, will be restarted" Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.937075 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" containerID="cri-o://1687b7a0dc62ca787c34748f8e774d1398baeeb28e8ceb14079cdf2e8b998238" gracePeriod=30 Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.954387 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-n22vs" podUID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerName="frr" containerID="cri-o://b8dbac94cf0906cf8b0e93d8247b8a9d204f6e8dca9a7504899c462a4a4937c2" gracePeriod=2 Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.955879 4809 generic.go:334] "Generic (PLEG): container finished" podID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerID="2e150576a9ab7f348d2f48751641b97f18a3b8a403d5f71d7c95cc18cad52a1f" 
exitCode=0 Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.955945 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerDied","Data":"2e150576a9ab7f348d2f48751641b97f18a3b8a403d5f71d7c95cc18cad52a1f"} Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.966230 4809 generic.go:334] "Generic (PLEG): container finished" podID="6befee19-0c78-47ca-a608-be246c0d7bb5" containerID="026ed91a0e60127784c5d684fb23c4c3092787e276daee53bf7b01c4da6cd7bb" exitCode=0 Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.966310 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" event={"ID":"6befee19-0c78-47ca-a608-be246c0d7bb5","Type":"ContainerDied","Data":"026ed91a0e60127784c5d684fb23c4c3092787e276daee53bf7b01c4da6cd7bb"} Mar 12 09:19:44 crc kubenswrapper[4809]: I0312 09:19:44.969753 4809 generic.go:334] "Generic (PLEG): container finished" podID="6b5e61e4-2d13-491e-be53-aed7ae027cb1" containerID="1204925015f75d92dcf2b3a5d0b3c76bf3d729fe9d0def579db6e8215475ff99" exitCode=0 Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:44.972566 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" event={"ID":"6b5e61e4-2d13-491e-be53-aed7ae027cb1","Type":"ContainerDied","Data":"1204925015f75d92dcf2b3a5d0b3c76bf3d729fe9d0def579db6e8215475ff99"} Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:44.974869 4809 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xx52w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:44.974912 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" podUID="ac7cc05e-989a-4474-9685-9600e3502dfd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.049217 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-dhc65 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.049265 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" podUID="8232d992-4bfb-46ca-a440-647d8c006309" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.049559 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-dhc65 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.049622 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-dhc65" podUID="8232d992-4bfb-46ca-a440-647d8c006309" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.102560 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-tbqw2 container/gateway namespace/openshift-logging: Readiness probe 
status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.102605 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" podUID="7546fb46-f601-417f-ad26-69a4fb625fdc" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.103404 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-fxhzz" podUID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:45 crc kubenswrapper[4809]: timeout: health rpc did not complete within 1s Mar 12 09:19:45 crc kubenswrapper[4809]: > Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.103464 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.104816 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"c75cd2420e84e4d4ddb914ebd6677b821a32a5ec4685448ee1ebcf37372b79b6"} pod="openstack-operators/openstack-operator-index-fxhzz" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.104871 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fxhzz" podUID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerName="registry-server" containerID="cri-o://c75cd2420e84e4d4ddb914ebd6677b821a32a5ec4685448ee1ebcf37372b79b6" gracePeriod=30 Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 
09:19:45.105009 4809 patch_prober.go:28] interesting pod/logging-loki-gateway-fd898bfdd-tbqw2 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.105061 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fd898bfdd-tbqw2" podUID="7546fb46-f601-417f-ad26-69a4fb625fdc" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.174534 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-fxhzz" podUID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:45 crc kubenswrapper[4809]: timeout: health rpc did not complete within 1s Mar 12 09:19:45 crc kubenswrapper[4809]: > Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.174655 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.276847 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": EOF" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.276904 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": EOF" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 
09:19:45.375747 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jt7s5" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.500151 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" containerID="cri-o://15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e" gracePeriod=20 Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.591609 4809 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-w2hxb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.591676 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" podUID="38092207-e107-4e5a-8706-b3ad66bea661" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.591961 4809 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-w2hxb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.591978 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2hxb" podUID="38092207-e107-4e5a-8706-b3ad66bea661" containerName="package-server-manager" 
probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.988847 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-7jdts" event={"ID":"b7ddc716-c09f-4923-8e70-f2251873aea9","Type":"ContainerStarted","Data":"b1c33bb600c468a5988bab3e6bc424a1d7930e7e010d3d2f70fc4529373ad33f"} Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.989364 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-7jdts" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.992592 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" event={"ID":"c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b","Type":"ContainerStarted","Data":"763e1e81f2c570e18b13ef8d1f99a3553a57d38d5f0e47a7d8c20af6d6af8528"} Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.993816 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.994614 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.994650 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 
09:19:45.998839 4809 generic.go:334] "Generic (PLEG): container finished" podID="b7c728ba-8361-4d19-833d-b3494509f355" containerID="b96fb9463b48b47c51299bf419244fbc30fcc5ab57bebf0d2a86617d3123c3f7" exitCode=0 Mar 12 09:19:45 crc kubenswrapper[4809]: I0312 09:19:45.998961 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerDied","Data":"b96fb9463b48b47c51299bf419244fbc30fcc5ab57bebf0d2a86617d3123c3f7"} Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.004719 4809 generic.go:334] "Generic (PLEG): container finished" podID="773274bc-3d57-4d1c-aaf9-f81ce1b981c4" containerID="b8dbac94cf0906cf8b0e93d8247b8a9d204f6e8dca9a7504899c462a4a4937c2" exitCode=143 Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.004815 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerDied","Data":"b8dbac94cf0906cf8b0e93d8247b8a9d204f6e8dca9a7504899c462a4a4937c2"} Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.020701 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" event={"ID":"63625da7-0f0d-48f1-8b58-c75e04bc31e4","Type":"ContainerStarted","Data":"ddabe6ef064c4849e33079c6fc0f865abefdc5ff9c3a196568514a087fd48a99"} Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.023309 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.025815 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" event={"ID":"5ba37d8e-9139-402a-9909-8a9c3fa4d103","Type":"ContainerStarted","Data":"03c0c2154b5041a82b53af6d4471ff76d4f330a5423776c211eb8b23217303e0"} Mar 12 09:19:46 crc 
kubenswrapper[4809]: I0312 09:19:46.027166 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.027315 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.027358 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.134163 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-6vssd_6f849ade-9f03-46fd-b9c5-5ddd61a27d1c/console-operator/0.log" Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.134251 4809 generic.go:334] "Generic (PLEG): container finished" podID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerID="1687b7a0dc62ca787c34748f8e774d1398baeeb28e8ceb14079cdf2e8b998238" exitCode=1 Mar 12 09:19:46 crc kubenswrapper[4809]: I0312 09:19:46.134314 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6vssd" event={"ID":"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c","Type":"ContainerDied","Data":"1687b7a0dc62ca787c34748f8e774d1398baeeb28e8ceb14079cdf2e8b998238"} Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.032526 4809 trace.go:236] Trace[1789952950]: "Calculate volume metrics of persistence for pod 
openstack/rabbitmq-server-1" (12-Mar-2026 09:19:43.756) (total time: 3210ms): Mar 12 09:19:47 crc kubenswrapper[4809]: Trace[1789952950]: [3.210012358s] [3.210012358s] END Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.187549 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" event={"ID":"6b5e61e4-2d13-491e-be53-aed7ae027cb1","Type":"ContainerStarted","Data":"78fe5914fd530fb8046975a993768b06be57e338732aec4ba3455053915b6def"} Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.187991 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.196055 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-6vssd_6f849ade-9f03-46fd-b9c5-5ddd61a27d1c/console-operator/0.log" Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.196286 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6vssd" event={"ID":"6f849ade-9f03-46fd-b9c5-5ddd61a27d1c","Type":"ContainerStarted","Data":"d315b8ce3bd832f0665a7b11ab000e1226eb72b0f99f2793a31796ee12fd8778"} Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.196802 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.196868 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.196904 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.204160 4809 generic.go:334] "Generic (PLEG): container finished" podID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerID="c75cd2420e84e4d4ddb914ebd6677b821a32a5ec4685448ee1ebcf37372b79b6" exitCode=0 Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.204250 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxhzz" event={"ID":"1e560783-0ec2-4688-a79e-59a1df5b2e61","Type":"ContainerDied","Data":"c75cd2420e84e4d4ddb914ebd6677b821a32a5ec4685448ee1ebcf37372b79b6"} Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.216773 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"383b08daa700f49a9470f658222daa9da10a64b3bbdd9a37c792fb09b4b400ee"} Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.219163 4809 generic.go:334] "Generic (PLEG): container finished" podID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerID="ade5c69ad1337d150741ab5e04d79cdca3f83cf4a3e5e0abf35fb6bf1a1930a9" exitCode=0 Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.219236 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jt7s5" event={"ID":"d64bdb22-6590-41af-94ad-0e725ca0355a","Type":"ContainerDied","Data":"ade5c69ad1337d150741ab5e04d79cdca3f83cf4a3e5e0abf35fb6bf1a1930a9"} Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.236449 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" 
event={"ID":"6befee19-0c78-47ca-a608-be246c0d7bb5","Type":"ContainerStarted","Data":"bad0445ffcc622fd21e0289c7f41c85c7d4f6ec7add39b5cd758b52fbd3d5d69"} Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.238657 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.238691 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.238935 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.239072 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.239145 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 12 09:19:47 crc kubenswrapper[4809]: I0312 09:19:47.397459 4809 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="b92111bc-ddbe-401a-83c3-2b0c1e805c6a" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.21:8081/readyz\": dial tcp 10.217.1.21:8081: connect: connection refused" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.172267 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6f39b431-0c84-4f84-b887-d5f74af3d573" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.264970 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerStarted","Data":"1ca7de1eb67729882a442d8883d9edfac4f18c1014005faffc334342bbc8befb"} Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.294696 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxhzz" event={"ID":"1e560783-0ec2-4688-a79e-59a1df5b2e61","Type":"ContainerStarted","Data":"bc3bec6466d7593440beefe393fa3b248acf7e9d769fda68b2097bc7f7e9b604"} Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.358491 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n22vs" event={"ID":"773274bc-3d57-4d1c-aaf9-f81ce1b981c4","Type":"ContainerStarted","Data":"b318f8f9972346dc49951a838c869507dab24031be4b2b451e2b751cc5f467e8"} Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.360140 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-n22vs" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.386343 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: 
connect: connection refused" start-of-body= Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.386397 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.386546 4809 patch_prober.go:28] interesting pod/console-operator-58897d9998-6vssd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.386586 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6vssd" podUID="6f849ade-9f03-46fd-b9c5-5ddd61a27d1c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.386632 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b92111bc-ddbe-401a-83c3-2b0c1e805c6a","Type":"ContainerStarted","Data":"44e060d8a527d5f8ec99359faeba2c1a65f4b3bd112e32d2ad339c6bebe26d6c"} Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.386843 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.386861 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.386967 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.388172 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.388196 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.789362 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-pbfks" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.793278 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-6pp5s" Mar 12 09:19:48 crc kubenswrapper[4809]: I0312 09:19:48.826393 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-zb8df" Mar 12 09:19:49 crc kubenswrapper[4809]: I0312 09:19:49.086836 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console/console-5b99b64b5d-4hf4l" Mar 12 09:19:49 crc kubenswrapper[4809]: I0312 09:19:49.455751 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jt7s5" event={"ID":"d64bdb22-6590-41af-94ad-0e725ca0355a","Type":"ContainerStarted","Data":"ed9608fbfe6df7ca3682f4848f67b8ebd6362cda69714cd6ffd6f462a26a2ca7"} Mar 12 09:19:49 crc kubenswrapper[4809]: I0312 09:19:49.472955 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jt7s5" podUID="d64bdb22-6590-41af-94ad-0e725ca0355a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Mar 12 09:19:49 crc kubenswrapper[4809]: I0312 09:19:49.763456 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-gtzwg" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.060841 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.060841 4809 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2qrsm container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.060896 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.060951 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" podUID="63625da7-0f0d-48f1-8b58-c75e04bc31e4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.073822 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-dbw4q" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.256896 4809 patch_prober.go:28] interesting pod/controller-manager-f7c76cdd5-nbjpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.257270 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" podUID="c1cae54e-b80d-4bac-bfd6-fcc3b6e8832b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.269815 4809 patch_prober.go:28] interesting pod/route-controller-manager-5dbd9bc447-2v8gh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.269871 4809 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" podUID="5ba37d8e-9139-402a-9909-8a9c3fa4d103" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.284043 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bbc5b75f-zqc46" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.466426 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jt7s5" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.496373 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:50 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:50 crc kubenswrapper[4809]: > Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.570776 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-57c6b5bd58-tt85t" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.577586 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6f39b431-0c84-4f84-b887-d5f74af3d573" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.670544 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 09:19:50 crc kubenswrapper[4809]: I0312 09:19:50.670607 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.899920 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:51 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:51 crc kubenswrapper[4809]: > Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.900478 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.902934 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"022ddad6061f352965a5f12228036f080521580cb0df65a26d724712ed694648"} pod="openshift-marketplace/redhat-operators-46txv" containerMessage="Container registry-server failed startup probe, will be restarted" Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.903404 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" containerID="cri-o://022ddad6061f352965a5f12228036f080521580cb0df65a26d724712ed694648" gracePeriod=30 Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.966015 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-56cf9d75b7-58kgc" Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.984402 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.992644 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" 
containerName="ceilometer-notification-agent" containerID="cri-o://9b7c5a236cb3855397521f1895f9fc69f78a5f4b6c64c6475c12dc1103055c0e" gracePeriod=30 Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.992699 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" containerID="cri-o://1ca7de1eb67729882a442d8883d9edfac4f18c1014005faffc334342bbc8befb" gracePeriod=30 Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.992753 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="sg-core" containerID="cri-o://6a002b770108522cac326e4c966f8f817e6201f7c15dd2d637628f5568e32439" gracePeriod=30 Mar 12 09:19:51 crc kubenswrapper[4809]: I0312 09:19:51.992734 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="proxy-httpd" containerID="cri-o://1dbc9116030d9b415404602a0515cf16917f16419579c8ba4bc5567e1a1520f8" gracePeriod=30 Mar 12 09:19:52 crc kubenswrapper[4809]: I0312 09:19:52.096525 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-n22vs" Mar 12 09:19:52 crc kubenswrapper[4809]: I0312 09:19:52.182139 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4" Mar 12 09:19:52 crc kubenswrapper[4809]: I0312 09:19:52.503007 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-fxhzz" podUID="1e560783-0ec2-4688-a79e-59a1df5b2e61" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:52 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:52 crc 
kubenswrapper[4809]: > Mar 12 09:19:52 crc kubenswrapper[4809]: I0312 09:19:52.919891 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-n22vs" Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.074559 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrsm" Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.467572 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6f39b431-0c84-4f84-b887-d5f74af3d573" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.467720 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.469650 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"e385256959cd28774fee9a870ce5aa13c4a88fe4001e42f00a7915dcc36f3a4d"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.469711 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6f39b431-0c84-4f84-b887-d5f74af3d573" containerName="cinder-scheduler" containerID="cri-o://e385256959cd28774fee9a870ce5aa13c4a88fe4001e42f00a7915dcc36f3a4d" gracePeriod=30 Mar 12 09:19:53 crc kubenswrapper[4809]: E0312 09:19:53.573507 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e is running failed: container process not found" 
containerID="15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 12 09:19:53 crc kubenswrapper[4809]: E0312 09:19:53.589370 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e is running failed: container process not found" containerID="15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 12 09:19:53 crc kubenswrapper[4809]: E0312 09:19:53.595084 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e is running failed: container process not found" containerID="15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 12 09:19:53 crc kubenswrapper[4809]: E0312 09:19:53.595180 4809 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerName="galera" Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.622529 4809 generic.go:334] "Generic (PLEG): container finished" podID="b7c728ba-8361-4d19-833d-b3494509f355" containerID="6a002b770108522cac326e4c966f8f817e6201f7c15dd2d637628f5568e32439" exitCode=2 Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.622634 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerDied","Data":"6a002b770108522cac326e4c966f8f817e6201f7c15dd2d637628f5568e32439"} Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.643226 4809 generic.go:334] "Generic (PLEG): container finished" podID="3d541616-6c38-428f-bd28-7dc54dceab8c" containerID="15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e" exitCode=0 Mar 12 09:19:53 crc kubenswrapper[4809]: I0312 09:19:53.643315 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d541616-6c38-428f-bd28-7dc54dceab8c","Type":"ContainerDied","Data":"15b484e8ab97b9e9fca1f4e65fcc961082eb7604226d160ee802686d1cd47b9e"} Mar 12 09:19:54 crc kubenswrapper[4809]: I0312 09:19:54.151735 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6vssd" Mar 12 09:19:54 crc kubenswrapper[4809]: I0312 09:19:54.563971 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xx52w" Mar 12 09:19:54 crc kubenswrapper[4809]: I0312 09:19:54.679543 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3d541616-6c38-428f-bd28-7dc54dceab8c","Type":"ContainerStarted","Data":"b59ceedc8a92102f7f4dde9d7fefdf4d524831b141ab43d89e1e21821289f97b"} Mar 12 09:19:55 crc kubenswrapper[4809]: I0312 09:19:55.733384 4809 generic.go:334] "Generic (PLEG): container finished" podID="b7c728ba-8361-4d19-833d-b3494509f355" containerID="1dbc9116030d9b415404602a0515cf16917f16419579c8ba4bc5567e1a1520f8" exitCode=0 Mar 12 09:19:55 crc kubenswrapper[4809]: I0312 09:19:55.733624 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerDied","Data":"1dbc9116030d9b415404602a0515cf16917f16419579c8ba4bc5567e1a1520f8"} Mar 12 09:19:56 crc 
kubenswrapper[4809]: I0312 09:19:56.756318 4809 generic.go:334] "Generic (PLEG): container finished" podID="b7c728ba-8361-4d19-833d-b3494509f355" containerID="9b7c5a236cb3855397521f1895f9fc69f78a5f4b6c64c6475c12dc1103055c0e" exitCode=0 Mar 12 09:19:56 crc kubenswrapper[4809]: I0312 09:19:56.756404 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerDied","Data":"9b7c5a236cb3855397521f1895f9fc69f78a5f4b6c64c6475c12dc1103055c0e"} Mar 12 09:19:57 crc kubenswrapper[4809]: I0312 09:19:57.412184 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 09:19:57 crc kubenswrapper[4809]: I0312 09:19:57.774947 4809 generic.go:334] "Generic (PLEG): container finished" podID="6f39b431-0c84-4f84-b887-d5f74af3d573" containerID="e385256959cd28774fee9a870ce5aa13c4a88fe4001e42f00a7915dcc36f3a4d" exitCode=0 Mar 12 09:19:57 crc kubenswrapper[4809]: I0312 09:19:57.775097 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f39b431-0c84-4f84-b887-d5f74af3d573","Type":"ContainerDied","Data":"e385256959cd28774fee9a870ce5aa13c4a88fe4001e42f00a7915dcc36f3a4d"} Mar 12 09:19:59 crc kubenswrapper[4809]: E0312 09:19:59.095314 4809 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:38888->38.102.83.80:34627: write tcp 38.102.83.80:38888->38.102.83.80:34627: write: connection reset by peer Mar 12 09:19:59 crc kubenswrapper[4809]: I0312 09:19:59.504438 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" probeResult="failure" output=< Mar 12 09:19:59 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:19:59 crc kubenswrapper[4809]: > Mar 12 09:19:59 crc kubenswrapper[4809]: I0312 
09:19:59.799048 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f39b431-0c84-4f84-b887-d5f74af3d573","Type":"ContainerStarted","Data":"3c256c0ef8332c19836da1b3caf794d85a4935df31fa2644e8444b5c2ba26e50"} Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.268883 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f7c76cdd5-nbjpc" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.299547 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dbd9bc447-2v8gh" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.334389 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555120-z4897"] Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.382630 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555120-z4897" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.430782 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kms6b\" (UniqueName: \"kubernetes.io/projected/63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b-kube-api-access-kms6b\") pod \"auto-csr-approver-29555120-z4897\" (UID: \"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b\") " pod="openshift-infra/auto-csr-approver-29555120-z4897" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.436505 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.437220 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.436512 4809 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.536408 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kms6b\" (UniqueName: \"kubernetes.io/projected/63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b-kube-api-access-kms6b\") pod \"auto-csr-approver-29555120-z4897\" (UID: \"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b\") " pod="openshift-infra/auto-csr-approver-29555120-z4897" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.562159 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555120-z4897"] Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.647421 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kms6b\" (UniqueName: \"kubernetes.io/projected/63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b-kube-api-access-kms6b\") pod \"auto-csr-approver-29555120-z4897\" (UID: \"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b\") " pod="openshift-infra/auto-csr-approver-29555120-z4897" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.785573 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555120-z4897" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.883056 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 09:20:00 crc kubenswrapper[4809]: I0312 09:20:00.938193 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fxhzz" Mar 12 09:20:01 crc kubenswrapper[4809]: I0312 09:20:01.516591 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-59fcfb5dc8-gpg2j" Mar 12 09:20:02 crc kubenswrapper[4809]: I0312 09:20:02.155937 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-rmrt9" Mar 12 09:20:02 crc kubenswrapper[4809]: I0312 09:20:02.172891 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-n22vs" Mar 12 09:20:02 crc kubenswrapper[4809]: I0312 09:20:02.334530 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-7jdts" Mar 12 09:20:02 crc kubenswrapper[4809]: I0312 09:20:02.616628 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555120-z4897"] Mar 12 09:20:02 crc kubenswrapper[4809]: W0312 09:20:02.639353 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63b7fc38_9fd2_49cc_a08e_ac23cb1e8e3b.slice/crio-fbb49fd4df6e706a881ece73874499b9e32c003399c4bc81e8033032d83d92b4 WatchSource:0}: Error finding container fbb49fd4df6e706a881ece73874499b9e32c003399c4bc81e8033032d83d92b4: Status 404 returned error can't find the container with id fbb49fd4df6e706a881ece73874499b9e32c003399c4bc81e8033032d83d92b4 Mar 12 09:20:02 crc kubenswrapper[4809]: I0312 
09:20:02.855467 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555120-z4897" event={"ID":"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b","Type":"ContainerStarted","Data":"fbb49fd4df6e706a881ece73874499b9e32c003399c4bc81e8033032d83d92b4"} Mar 12 09:20:03 crc kubenswrapper[4809]: I0312 09:20:03.380657 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 09:20:03 crc kubenswrapper[4809]: I0312 09:20:03.438355 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="6f39b431-0c84-4f84-b887-d5f74af3d573" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 09:20:03 crc kubenswrapper[4809]: I0312 09:20:03.549690 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 09:20:03 crc kubenswrapper[4809]: I0312 09:20:03.549739 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 09:20:03 crc kubenswrapper[4809]: I0312 09:20:03.723604 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jt7s5" Mar 12 09:20:03 crc kubenswrapper[4809]: I0312 09:20:03.799364 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 09:20:04 crc kubenswrapper[4809]: I0312 09:20:04.034070 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 12 09:20:05 crc kubenswrapper[4809]: I0312 09:20:05.914184 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555120-z4897" event={"ID":"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b","Type":"ContainerStarted","Data":"b1a4a9acf515fc0770bc0ad84b9c1b1e7de812d10e38373b6504cb2e72cdf4ee"} Mar 12 09:20:05 crc kubenswrapper[4809]: I0312 09:20:05.947469 
4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555120-z4897" podStartSLOduration=4.806811303 podStartE2EDuration="5.94599249s" podCreationTimestamp="2026-03-12 09:20:00 +0000 UTC" firstStartedPulling="2026-03-12 09:20:02.664297237 +0000 UTC m=+4876.246332970" lastFinishedPulling="2026-03-12 09:20:03.803478424 +0000 UTC m=+4877.385514157" observedRunningTime="2026-03-12 09:20:05.932896843 +0000 UTC m=+4879.514932576" watchObservedRunningTime="2026-03-12 09:20:05.94599249 +0000 UTC m=+4879.528028223" Mar 12 09:20:06 crc kubenswrapper[4809]: I0312 09:20:06.926501 4809 generic.go:334] "Generic (PLEG): container finished" podID="63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b" containerID="b1a4a9acf515fc0770bc0ad84b9c1b1e7de812d10e38373b6504cb2e72cdf4ee" exitCode=0 Mar 12 09:20:06 crc kubenswrapper[4809]: I0312 09:20:06.926733 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555120-z4897" event={"ID":"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b","Type":"ContainerDied","Data":"b1a4a9acf515fc0770bc0ad84b9c1b1e7de812d10e38373b6504cb2e72cdf4ee"} Mar 12 09:20:08 crc kubenswrapper[4809]: I0312 09:20:08.416638 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 09:20:09 crc kubenswrapper[4809]: I0312 09:20:09.232832 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555120-z4897" Mar 12 09:20:09 crc kubenswrapper[4809]: I0312 09:20:09.350193 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kms6b\" (UniqueName: \"kubernetes.io/projected/63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b-kube-api-access-kms6b\") pod \"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b\" (UID: \"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b\") " Mar 12 09:20:09 crc kubenswrapper[4809]: I0312 09:20:09.381040 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b-kube-api-access-kms6b" (OuterVolumeSpecName: "kube-api-access-kms6b") pod "63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b" (UID: "63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b"). InnerVolumeSpecName "kube-api-access-kms6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:20:09 crc kubenswrapper[4809]: I0312 09:20:09.453177 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kms6b\" (UniqueName: \"kubernetes.io/projected/63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b-kube-api-access-kms6b\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:09 crc kubenswrapper[4809]: I0312 09:20:09.971356 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555120-z4897" event={"ID":"63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b","Type":"ContainerDied","Data":"fbb49fd4df6e706a881ece73874499b9e32c003399c4bc81e8033032d83d92b4"} Mar 12 09:20:09 crc kubenswrapper[4809]: I0312 09:20:09.971482 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555120-z4897" Mar 12 09:20:09 crc kubenswrapper[4809]: I0312 09:20:09.975150 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbb49fd4df6e706a881ece73874499b9e32c003399c4bc81e8033032d83d92b4" Mar 12 09:20:10 crc kubenswrapper[4809]: I0312 09:20:10.197038 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" probeResult="failure" output=< Mar 12 09:20:10 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:20:10 crc kubenswrapper[4809]: > Mar 12 09:20:10 crc kubenswrapper[4809]: E0312 09:20:10.322463 4809 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63b7fc38_9fd2_49cc_a08e_ac23cb1e8e3b.slice\": RecentStats: unable to find data in memory cache]" Mar 12 09:20:10 crc kubenswrapper[4809]: I0312 09:20:10.359414 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555114-7sx6x"] Mar 12 09:20:10 crc kubenswrapper[4809]: I0312 09:20:10.380702 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555114-7sx6x"] Mar 12 09:20:11 crc kubenswrapper[4809]: I0312 09:20:11.119812 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3ff787-1ddc-4db7-928a-e6b6c685e6f9" path="/var/lib/kubelet/pods/5b3ff787-1ddc-4db7-928a-e6b6c685e6f9/volumes" Mar 12 09:20:14 crc kubenswrapper[4809]: I0312 09:20:14.940816 4809 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.25:3000/\": dial tcp 10.217.1.25:3000: connect: connection refused" 
Mar 12 09:20:18 crc kubenswrapper[4809]: I0312 09:20:18.535473 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:20:18 crc kubenswrapper[4809]: I0312 09:20:18.605541 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:20:19 crc kubenswrapper[4809]: I0312 09:20:19.784102 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpfq8"] Mar 12 09:20:20 crc kubenswrapper[4809]: I0312 09:20:20.110952 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dpfq8" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" containerID="cri-o://95ce856174ba6755f10553d8bce99f04d6833ec4cd2b775ddc73afa17d61e2d6" gracePeriod=2 Mar 12 09:20:21 crc kubenswrapper[4809]: I0312 09:20:21.129234 4809 generic.go:334] "Generic (PLEG): container finished" podID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerID="95ce856174ba6755f10553d8bce99f04d6833ec4cd2b775ddc73afa17d61e2d6" exitCode=0 Mar 12 09:20:21 crc kubenswrapper[4809]: I0312 09:20:21.129316 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpfq8" event={"ID":"58f90f01-2a85-4755-a6ea-fef9035e8982","Type":"ContainerDied","Data":"95ce856174ba6755f10553d8bce99f04d6833ec4cd2b775ddc73afa17d61e2d6"} Mar 12 09:20:21 crc kubenswrapper[4809]: I0312 09:20:21.774179 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:20:21 crc kubenswrapper[4809]: I0312 09:20:21.913014 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tccv6\" (UniqueName: \"kubernetes.io/projected/58f90f01-2a85-4755-a6ea-fef9035e8982-kube-api-access-tccv6\") pod \"58f90f01-2a85-4755-a6ea-fef9035e8982\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " Mar 12 09:20:21 crc kubenswrapper[4809]: I0312 09:20:21.913343 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-utilities\") pod \"58f90f01-2a85-4755-a6ea-fef9035e8982\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " Mar 12 09:20:21 crc kubenswrapper[4809]: I0312 09:20:21.913562 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-catalog-content\") pod \"58f90f01-2a85-4755-a6ea-fef9035e8982\" (UID: \"58f90f01-2a85-4755-a6ea-fef9035e8982\") " Mar 12 09:20:21 crc kubenswrapper[4809]: I0312 09:20:21.926188 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-utilities" (OuterVolumeSpecName: "utilities") pod "58f90f01-2a85-4755-a6ea-fef9035e8982" (UID: "58f90f01-2a85-4755-a6ea-fef9035e8982"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:20:21 crc kubenswrapper[4809]: I0312 09:20:21.978916 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f90f01-2a85-4755-a6ea-fef9035e8982-kube-api-access-tccv6" (OuterVolumeSpecName: "kube-api-access-tccv6") pod "58f90f01-2a85-4755-a6ea-fef9035e8982" (UID: "58f90f01-2a85-4755-a6ea-fef9035e8982"). InnerVolumeSpecName "kube-api-access-tccv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.007173 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58f90f01-2a85-4755-a6ea-fef9035e8982" (UID: "58f90f01-2a85-4755-a6ea-fef9035e8982"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.018618 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.018668 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tccv6\" (UniqueName: \"kubernetes.io/projected/58f90f01-2a85-4755-a6ea-fef9035e8982-kube-api-access-tccv6\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.018686 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58f90f01-2a85-4755-a6ea-fef9035e8982-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.157099 4809 generic.go:334] "Generic (PLEG): container finished" podID="b7c728ba-8361-4d19-833d-b3494509f355" containerID="1ca7de1eb67729882a442d8883d9edfac4f18c1014005faffc334342bbc8befb" exitCode=137 Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.157446 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerDied","Data":"1ca7de1eb67729882a442d8883d9edfac4f18c1014005faffc334342bbc8befb"} Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.157678 4809 scope.go:117] "RemoveContainer" 
containerID="b96fb9463b48b47c51299bf419244fbc30fcc5ab57bebf0d2a86617d3123c3f7" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.182729 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-46txv_31c13acb-9eed-41f3-b4a5-63c4cdbdaa40/registry-server/0.log" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.186670 4809 generic.go:334] "Generic (PLEG): container finished" podID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerID="022ddad6061f352965a5f12228036f080521580cb0df65a26d724712ed694648" exitCode=137 Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.186807 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerDied","Data":"022ddad6061f352965a5f12228036f080521580cb0df65a26d724712ed694648"} Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.194050 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpfq8" event={"ID":"58f90f01-2a85-4755-a6ea-fef9035e8982","Type":"ContainerDied","Data":"368a9e66a8a48ff8f44ee166cbdccf5b13cc5d176421d480acaa7d4acf1b0ba3"} Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.196154 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpfq8" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.257817 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpfq8"] Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.270329 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpfq8"] Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.289755 4809 scope.go:117] "RemoveContainer" containerID="95ce856174ba6755f10553d8bce99f04d6833ec4cd2b775ddc73afa17d61e2d6" Mar 12 09:20:22 crc kubenswrapper[4809]: I0312 09:20:22.362288 4809 scope.go:117] "RemoveContainer" containerID="a4a9a12036286fbfb0b54c7538dc53bdd4bde136374beef43d56fb0262152906" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.131398 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" path="/var/lib/kubelet/pods/58f90f01-2a85-4755-a6ea-fef9035e8982/volumes" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.143448 4809 scope.go:117] "RemoveContainer" containerID="ea2c3109ce72d9b310552a518a4d11ed1ffb1970f054e487fe17d1151103740b" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.216626 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7c728ba-8361-4d19-833d-b3494509f355","Type":"ContainerDied","Data":"cfad6192d311afdd541f594629f203d5f4f272436ad2ab43b72ffc477fcd0109"} Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.217016 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfad6192d311afdd541f594629f203d5f4f272436ad2ab43b72ffc477fcd0109" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.318078 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.376133 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-scripts\") pod \"b7c728ba-8361-4d19-833d-b3494509f355\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.376189 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-config-data\") pod \"b7c728ba-8361-4d19-833d-b3494509f355\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.376287 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-ceilometer-tls-certs\") pod \"b7c728ba-8361-4d19-833d-b3494509f355\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.376320 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-log-httpd\") pod \"b7c728ba-8361-4d19-833d-b3494509f355\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.376363 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-run-httpd\") pod \"b7c728ba-8361-4d19-833d-b3494509f355\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.376419 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-combined-ca-bundle\") pod \"b7c728ba-8361-4d19-833d-b3494509f355\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.376496 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5pkf\" (UniqueName: \"kubernetes.io/projected/b7c728ba-8361-4d19-833d-b3494509f355-kube-api-access-j5pkf\") pod \"b7c728ba-8361-4d19-833d-b3494509f355\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.376612 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-sg-core-conf-yaml\") pod \"b7c728ba-8361-4d19-833d-b3494509f355\" (UID: \"b7c728ba-8361-4d19-833d-b3494509f355\") " Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.385029 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b7c728ba-8361-4d19-833d-b3494509f355" (UID: "b7c728ba-8361-4d19-833d-b3494509f355"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.385738 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b7c728ba-8361-4d19-833d-b3494509f355" (UID: "b7c728ba-8361-4d19-833d-b3494509f355"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.389711 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c728ba-8361-4d19-833d-b3494509f355-kube-api-access-j5pkf" (OuterVolumeSpecName: "kube-api-access-j5pkf") pod "b7c728ba-8361-4d19-833d-b3494509f355" (UID: "b7c728ba-8361-4d19-833d-b3494509f355"). InnerVolumeSpecName "kube-api-access-j5pkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.394010 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-scripts" (OuterVolumeSpecName: "scripts") pod "b7c728ba-8361-4d19-833d-b3494509f355" (UID: "b7c728ba-8361-4d19-833d-b3494509f355"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.451902 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b7c728ba-8361-4d19-833d-b3494509f355" (UID: "b7c728ba-8361-4d19-833d-b3494509f355"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.497148 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5pkf\" (UniqueName: \"kubernetes.io/projected/b7c728ba-8361-4d19-833d-b3494509f355-kube-api-access-j5pkf\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.497192 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.497224 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.497239 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.497253 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7c728ba-8361-4d19-833d-b3494509f355-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.526075 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b7c728ba-8361-4d19-833d-b3494509f355" (UID: "b7c728ba-8361-4d19-833d-b3494509f355"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.560052 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7c728ba-8361-4d19-833d-b3494509f355" (UID: "b7c728ba-8361-4d19-833d-b3494509f355"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.600032 4809 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.600527 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.648592 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-config-data" (OuterVolumeSpecName: "config-data") pod "b7c728ba-8361-4d19-833d-b3494509f355" (UID: "b7c728ba-8361-4d19-833d-b3494509f355"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:23 crc kubenswrapper[4809]: I0312 09:20:23.702621 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c728ba-8361-4d19-833d-b3494509f355-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.234872 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-46txv_31c13acb-9eed-41f3-b4a5-63c4cdbdaa40/registry-server/0.log" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.237451 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerStarted","Data":"08c3d8f88219b058a97746e7181f06b98934851205b15209fed58ea9d6bf9dec"} Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.237490 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.310797 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.329905 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.404289 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405604 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405628 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405651 4809 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405659 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405676 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="proxy-httpd" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405682 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="proxy-httpd" Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405694 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b" containerName="oc" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405700 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b" containerName="oc" Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405720 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="extract-utilities" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405726 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="extract-utilities" Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405737 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405744 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405752 4809 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="sg-core" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405759 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="sg-core" Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405771 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="extract-content" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405776 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="extract-content" Mar 12 09:20:24 crc kubenswrapper[4809]: E0312 09:20:24.405790 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-notification-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.405796 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-notification-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.406059 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.406082 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="sg-core" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.406099 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-notification-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.406108 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b" containerName="oc" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.406224 4809 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="ceilometer-central-agent" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.406249 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c728ba-8361-4d19-833d-b3494509f355" containerName="proxy-httpd" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.406260 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f90f01-2a85-4755-a6ea-fef9035e8982" containerName="registry-server" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.413772 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.424451 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.424456 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.424460 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.437581 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.524613 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.524684 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.524777 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.524806 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.524915 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-config-data\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.525093 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7d8d\" (UniqueName: \"kubernetes.io/projected/7aa8a573-6465-4f92-9cc7-79620af9d7f0-kube-api-access-r7d8d\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.525411 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-scripts\") pod \"ceilometer-0\" (UID: 
\"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.525625 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.628725 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.629493 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.629905 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.630200 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.630462 4809 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-config-data\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.630733 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7d8d\" (UniqueName: \"kubernetes.io/projected/7aa8a573-6465-4f92-9cc7-79620af9d7f0-kube-api-access-r7d8d\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.630848 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-run-httpd\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.631310 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-scripts\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.631640 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.632724 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-log-httpd\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: 
I0312 09:20:24.635876 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.636420 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.636870 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-scripts\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.637808 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.639634 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-config-data\") pod \"ceilometer-0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:24 crc kubenswrapper[4809]: I0312 09:20:24.766637 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7d8d\" (UniqueName: \"kubernetes.io/projected/7aa8a573-6465-4f92-9cc7-79620af9d7f0-kube-api-access-r7d8d\") pod \"ceilometer-0\" (UID: 
\"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " pod="openstack/ceilometer-0" Mar 12 09:20:25 crc kubenswrapper[4809]: I0312 09:20:25.034380 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 09:20:25 crc kubenswrapper[4809]: I0312 09:20:25.127259 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c728ba-8361-4d19-833d-b3494509f355" path="/var/lib/kubelet/pods/b7c728ba-8361-4d19-833d-b3494509f355/volumes" Mar 12 09:20:26 crc kubenswrapper[4809]: I0312 09:20:26.394315 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:26 crc kubenswrapper[4809]: W0312 09:20:26.397373 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7aa8a573_6465_4f92_9cc7_79620af9d7f0.slice/crio-da35b95c0c7a0d8ba97e9f52f1f07f07dc0d60b9ae8cd52f2da01380b0b35848 WatchSource:0}: Error finding container da35b95c0c7a0d8ba97e9f52f1f07f07dc0d60b9ae8cd52f2da01380b0b35848: Status 404 returned error can't find the container with id da35b95c0c7a0d8ba97e9f52f1f07f07dc0d60b9ae8cd52f2da01380b0b35848 Mar 12 09:20:26 crc kubenswrapper[4809]: I0312 09:20:26.926417 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:27 crc kubenswrapper[4809]: I0312 09:20:27.269557 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerStarted","Data":"747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e"} Mar 12 09:20:27 crc kubenswrapper[4809]: I0312 09:20:27.269911 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerStarted","Data":"da35b95c0c7a0d8ba97e9f52f1f07f07dc0d60b9ae8cd52f2da01380b0b35848"} Mar 12 09:20:29 crc kubenswrapper[4809]: I0312 09:20:29.291155 4809 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerStarted","Data":"0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d"} Mar 12 09:20:30 crc kubenswrapper[4809]: I0312 09:20:30.722154 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:20:30 crc kubenswrapper[4809]: I0312 09:20:30.723794 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46txv" Mar 12 09:20:31 crc kubenswrapper[4809]: I0312 09:20:31.332747 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerStarted","Data":"5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158"} Mar 12 09:20:31 crc kubenswrapper[4809]: I0312 09:20:31.861184 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:20:31 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:20:31 crc kubenswrapper[4809]: > Mar 12 09:20:33 crc kubenswrapper[4809]: I0312 09:20:33.376583 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerStarted","Data":"f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c"} Mar 12 09:20:33 crc kubenswrapper[4809]: I0312 09:20:33.378208 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 09:20:33 crc kubenswrapper[4809]: I0312 09:20:33.376775 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" 
containerName="proxy-httpd" containerID="cri-o://f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c" gracePeriod=30 Mar 12 09:20:33 crc kubenswrapper[4809]: I0312 09:20:33.376810 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="sg-core" containerID="cri-o://5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158" gracePeriod=30 Mar 12 09:20:33 crc kubenswrapper[4809]: I0312 09:20:33.376845 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="ceilometer-notification-agent" containerID="cri-o://0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d" gracePeriod=30 Mar 12 09:20:33 crc kubenswrapper[4809]: I0312 09:20:33.376753 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="ceilometer-central-agent" containerID="cri-o://747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e" gracePeriod=30 Mar 12 09:20:33 crc kubenswrapper[4809]: I0312 09:20:33.406599 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.551194113 podStartE2EDuration="9.405582142s" podCreationTimestamp="2026-03-12 09:20:24 +0000 UTC" firstStartedPulling="2026-03-12 09:20:26.400484631 +0000 UTC m=+4899.982520364" lastFinishedPulling="2026-03-12 09:20:32.25487266 +0000 UTC m=+4905.836908393" observedRunningTime="2026-03-12 09:20:33.401525681 +0000 UTC m=+4906.983561414" watchObservedRunningTime="2026-03-12 09:20:33.405582142 +0000 UTC m=+4906.987617875" Mar 12 09:20:34 crc kubenswrapper[4809]: I0312 09:20:34.440903 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerDied","Data":"f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c"} Mar 12 09:20:34 crc kubenswrapper[4809]: I0312 09:20:34.446152 4809 generic.go:334] "Generic (PLEG): container finished" podID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerID="f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c" exitCode=0 Mar 12 09:20:34 crc kubenswrapper[4809]: I0312 09:20:34.446507 4809 generic.go:334] "Generic (PLEG): container finished" podID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerID="5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158" exitCode=2 Mar 12 09:20:34 crc kubenswrapper[4809]: I0312 09:20:34.446531 4809 generic.go:334] "Generic (PLEG): container finished" podID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerID="0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d" exitCode=0 Mar 12 09:20:34 crc kubenswrapper[4809]: I0312 09:20:34.446556 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerDied","Data":"5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158"} Mar 12 09:20:34 crc kubenswrapper[4809]: I0312 09:20:34.446586 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerDied","Data":"0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d"} Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.285373 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.390445 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-sg-core-conf-yaml\") pod \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.390530 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-scripts\") pod \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.390556 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-combined-ca-bundle\") pod \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.390631 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-run-httpd\") pod \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.390759 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7d8d\" (UniqueName: \"kubernetes.io/projected/7aa8a573-6465-4f92-9cc7-79620af9d7f0-kube-api-access-r7d8d\") pod \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.390803 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-config-data\") pod \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.390826 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-ceilometer-tls-certs\") pod \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.390882 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-log-httpd\") pod \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\" (UID: \"7aa8a573-6465-4f92-9cc7-79620af9d7f0\") " Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.393054 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7aa8a573-6465-4f92-9cc7-79620af9d7f0" (UID: "7aa8a573-6465-4f92-9cc7-79620af9d7f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.395263 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7aa8a573-6465-4f92-9cc7-79620af9d7f0" (UID: "7aa8a573-6465-4f92-9cc7-79620af9d7f0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.404731 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa8a573-6465-4f92-9cc7-79620af9d7f0-kube-api-access-r7d8d" (OuterVolumeSpecName: "kube-api-access-r7d8d") pod "7aa8a573-6465-4f92-9cc7-79620af9d7f0" (UID: "7aa8a573-6465-4f92-9cc7-79620af9d7f0"). InnerVolumeSpecName "kube-api-access-r7d8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.428159 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-scripts" (OuterVolumeSpecName: "scripts") pod "7aa8a573-6465-4f92-9cc7-79620af9d7f0" (UID: "7aa8a573-6465-4f92-9cc7-79620af9d7f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.451591 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7aa8a573-6465-4f92-9cc7-79620af9d7f0" (UID: "7aa8a573-6465-4f92-9cc7-79620af9d7f0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.499832 4809 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.499872 4809 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.499885 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7d8d\" (UniqueName: \"kubernetes.io/projected/7aa8a573-6465-4f92-9cc7-79620af9d7f0-kube-api-access-r7d8d\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.499900 4809 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7aa8a573-6465-4f92-9cc7-79620af9d7f0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.499910 4809 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.516690 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7aa8a573-6465-4f92-9cc7-79620af9d7f0" (UID: "7aa8a573-6465-4f92-9cc7-79620af9d7f0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.536339 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aa8a573-6465-4f92-9cc7-79620af9d7f0" (UID: "7aa8a573-6465-4f92-9cc7-79620af9d7f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.537543 4809 generic.go:334] "Generic (PLEG): container finished" podID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerID="747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e" exitCode=0 Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.537621 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.537640 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerDied","Data":"747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e"} Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.537994 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7aa8a573-6465-4f92-9cc7-79620af9d7f0","Type":"ContainerDied","Data":"da35b95c0c7a0d8ba97e9f52f1f07f07dc0d60b9ae8cd52f2da01380b0b35848"} Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.538073 4809 scope.go:117] "RemoveContainer" containerID="f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.583757 4809 scope.go:117] "RemoveContainer" containerID="5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.602048 4809 reconciler_common.go:293] "Volume detached for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.602075 4809 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.603190 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-config-data" (OuterVolumeSpecName: "config-data") pod "7aa8a573-6465-4f92-9cc7-79620af9d7f0" (UID: "7aa8a573-6465-4f92-9cc7-79620af9d7f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.705394 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa8a573-6465-4f92-9cc7-79620af9d7f0-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.733525 4809 scope.go:117] "RemoveContainer" containerID="0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.776212 4809 scope.go:117] "RemoveContainer" containerID="747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.804613 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=< Mar 12 09:20:41 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:20:41 crc kubenswrapper[4809]: > Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.816025 4809 scope.go:117] "RemoveContainer" 
containerID="f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c" Mar 12 09:20:41 crc kubenswrapper[4809]: E0312 09:20:41.818079 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c\": container with ID starting with f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c not found: ID does not exist" containerID="f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.818140 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c"} err="failed to get container status \"f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c\": rpc error: code = NotFound desc = could not find container \"f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c\": container with ID starting with f91c99b9bab69b28fbd88d6c79b1dad31b9177ad587d95b652aa790bb062703c not found: ID does not exist" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.818171 4809 scope.go:117] "RemoveContainer" containerID="5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158" Mar 12 09:20:41 crc kubenswrapper[4809]: E0312 09:20:41.818605 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158\": container with ID starting with 5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158 not found: ID does not exist" containerID="5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.818635 4809 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158"} err="failed to get container status \"5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158\": rpc error: code = NotFound desc = could not find container \"5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158\": container with ID starting with 5e5d0e72956c55db46c9283659f52a0dccdc432343e603a20a597e3956337158 not found: ID does not exist" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.818650 4809 scope.go:117] "RemoveContainer" containerID="0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d" Mar 12 09:20:41 crc kubenswrapper[4809]: E0312 09:20:41.819009 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d\": container with ID starting with 0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d not found: ID does not exist" containerID="0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.819053 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d"} err="failed to get container status \"0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d\": rpc error: code = NotFound desc = could not find container \"0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d\": container with ID starting with 0c28ea58806b5ca0a408d819133f1a7219cb7c5a2fddf72ac6bf4bd426c3856d not found: ID does not exist" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.819084 4809 scope.go:117] "RemoveContainer" containerID="747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e" Mar 12 09:20:41 crc kubenswrapper[4809]: E0312 09:20:41.819460 4809 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e\": container with ID starting with 747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e not found: ID does not exist" containerID="747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.819494 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e"} err="failed to get container status \"747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e\": rpc error: code = NotFound desc = could not find container \"747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e\": container with ID starting with 747ca9a97fd2e1b0ed33bcfa8f7210c110bfc61e8804c44c8271a5703a01597e not found: ID does not exist" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.879268 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.890834 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.927513 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:41 crc kubenswrapper[4809]: E0312 09:20:41.930271 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="ceilometer-central-agent" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.930300 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="ceilometer-central-agent" Mar 12 09:20:41 crc kubenswrapper[4809]: E0312 09:20:41.930322 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="proxy-httpd" 
Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.930329 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="proxy-httpd" Mar 12 09:20:41 crc kubenswrapper[4809]: E0312 09:20:41.930342 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="ceilometer-notification-agent" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.930347 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="ceilometer-notification-agent" Mar 12 09:20:41 crc kubenswrapper[4809]: E0312 09:20:41.930431 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="sg-core" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.930437 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="sg-core" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.932466 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="ceilometer-central-agent" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.932488 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="sg-core" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.932537 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="proxy-httpd" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.932548 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" containerName="ceilometer-notification-agent" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.935828 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.939516 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.940291 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.951333 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 09:20:41 crc kubenswrapper[4809]: I0312 09:20:41.959497 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.014151 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwtb\" (UniqueName: \"kubernetes.io/projected/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-kube-api-access-zpwtb\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.014412 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.014445 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-run-httpd\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.014715 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-scripts\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.015186 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.015231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-config-data\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.015322 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.015391 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-log-httpd\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.117894 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpwtb\" (UniqueName: \"kubernetes.io/projected/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-kube-api-access-zpwtb\") pod \"ceilometer-0\" (UID: 
\"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.117941 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.117970 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-run-httpd\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.118019 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-scripts\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.118100 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.118136 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-config-data\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.118158 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.118175 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-log-httpd\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.119465 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-log-httpd\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.119590 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-run-httpd\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.124381 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.127007 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: 
I0312 09:20:42.127995 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-config-data\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.128235 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.132733 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-scripts\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.134665 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpwtb\" (UniqueName: \"kubernetes.io/projected/c30cf37a-f8d3-4e4c-84b1-d348eccf1b02-kube-api-access-zpwtb\") pod \"ceilometer-0\" (UID: \"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02\") " pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.274157 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 09:20:42 crc kubenswrapper[4809]: I0312 09:20:42.859624 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 09:20:43 crc kubenswrapper[4809]: I0312 09:20:43.203813 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa8a573-6465-4f92-9cc7-79620af9d7f0" path="/var/lib/kubelet/pods/7aa8a573-6465-4f92-9cc7-79620af9d7f0/volumes" Mar 12 09:20:43 crc kubenswrapper[4809]: I0312 09:20:43.569307 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02","Type":"ContainerStarted","Data":"02ece6097791972aee984f5c706e62b04da097ac7147ceb90978ca4195b9edbf"} Mar 12 09:20:43 crc kubenswrapper[4809]: I0312 09:20:43.569734 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02","Type":"ContainerStarted","Data":"4e822f15373dfe0bb4547c3972d934404e1cd66dd48d9c82a1a804942b4c34d8"} Mar 12 09:20:44 crc kubenswrapper[4809]: I0312 09:20:44.582453 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02","Type":"ContainerStarted","Data":"f8d8bdeaa365936f2e3cb7dafcd6ad047dee30b9ddeb1a86d9000d18f81c52a6"} Mar 12 09:20:45 crc kubenswrapper[4809]: I0312 09:20:45.597620 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02","Type":"ContainerStarted","Data":"7c77095fdfdd06e3d6b4f7985b607d84be939cfd3de7ca6700038135487315f5"} Mar 12 09:20:48 crc kubenswrapper[4809]: I0312 09:20:48.662684 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c30cf37a-f8d3-4e4c-84b1-d348eccf1b02","Type":"ContainerStarted","Data":"5d901c161a2a688547f410f5b7f78b6c4a70da982ee0b59756df913f30c1803e"} Mar 12 09:20:48 crc kubenswrapper[4809]: I0312 
09:20:48.663332 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 12 09:20:48 crc kubenswrapper[4809]: I0312 09:20:48.693152 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.222265387 podStartE2EDuration="7.693134073s" podCreationTimestamp="2026-03-12 09:20:41 +0000 UTC" firstStartedPulling="2026-03-12 09:20:42.860866686 +0000 UTC m=+4916.442902419" lastFinishedPulling="2026-03-12 09:20:47.331735372 +0000 UTC m=+4920.913771105" observedRunningTime="2026-03-12 09:20:48.684661071 +0000 UTC m=+4922.266696814" watchObservedRunningTime="2026-03-12 09:20:48.693134073 +0000 UTC m=+4922.275169806"
Mar 12 09:20:49 crc kubenswrapper[4809]: I0312 09:20:49.063014 4809 scope.go:117] "RemoveContainer" containerID="9b7c5a236cb3855397521f1895f9fc69f78a5f4b6c64c6475c12dc1103055c0e"
Mar 12 09:20:49 crc kubenswrapper[4809]: I0312 09:20:49.109660 4809 scope.go:117] "RemoveContainer" containerID="afaea90a01e525c8a793c237e158379fce17e54cb95b2150e82113f8f7c9c6d2"
Mar 12 09:20:49 crc kubenswrapper[4809]: I0312 09:20:49.195604 4809 scope.go:117] "RemoveContainer" containerID="1dbc9116030d9b415404602a0515cf16917f16419579c8ba4bc5567e1a1520f8"
Mar 12 09:20:49 crc kubenswrapper[4809]: I0312 09:20:49.225778 4809 scope.go:117] "RemoveContainer" containerID="6a002b770108522cac326e4c966f8f817e6201f7c15dd2d637628f5568e32439"
Mar 12 09:20:52 crc kubenswrapper[4809]: I0312 09:20:52.391518 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:20:52 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:20:52 crc kubenswrapper[4809]: >
Mar 12 09:21:01 crc kubenswrapper[4809]: I0312 09:21:01.851090 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:21:01 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:21:01 crc kubenswrapper[4809]: >
Mar 12 09:21:11 crc kubenswrapper[4809]: I0312 09:21:11.926226 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:21:11 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:21:11 crc kubenswrapper[4809]: >
Mar 12 09:21:12 crc kubenswrapper[4809]: I0312 09:21:12.389455 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 12 09:21:22 crc kubenswrapper[4809]: I0312 09:21:22.060250 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:21:22 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:21:22 crc kubenswrapper[4809]: >
Mar 12 09:21:31 crc kubenswrapper[4809]: I0312 09:21:31.793645 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:21:31 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:21:31 crc kubenswrapper[4809]: >
Mar 12 09:21:40 crc kubenswrapper[4809]: I0312 09:21:40.845355 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-46txv"
Mar 12 09:21:40 crc kubenswrapper[4809]: I0312 09:21:40.914101 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-46txv"
Mar 12 09:21:41 crc kubenswrapper[4809]: I0312 09:21:41.137494 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46txv"]
Mar 12 09:21:42 crc kubenswrapper[4809]: I0312 09:21:42.379937 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-46txv" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server" containerID="cri-o://08c3d8f88219b058a97746e7181f06b98934851205b15209fed58ea9d6bf9dec" gracePeriod=2
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.394799 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-46txv_31c13acb-9eed-41f3-b4a5-63c4cdbdaa40/registry-server/0.log"
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.400232 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerDied","Data":"08c3d8f88219b058a97746e7181f06b98934851205b15209fed58ea9d6bf9dec"}
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.400801 4809 generic.go:334] "Generic (PLEG): container finished" podID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerID="08c3d8f88219b058a97746e7181f06b98934851205b15209fed58ea9d6bf9dec" exitCode=0
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.404105 4809 scope.go:117] "RemoveContainer" containerID="022ddad6061f352965a5f12228036f080521580cb0df65a26d724712ed694648"
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.893261 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46txv"
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.949949 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-catalog-content\") pod \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") "
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.950503 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-utilities\") pod \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") "
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.950671 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdh7p\" (UniqueName: \"kubernetes.io/projected/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-kube-api-access-tdh7p\") pod \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\" (UID: \"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40\") "
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.954509 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-utilities" (OuterVolumeSpecName: "utilities") pod "31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" (UID: "31c13acb-9eed-41f3-b4a5-63c4cdbdaa40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 09:21:43 crc kubenswrapper[4809]: I0312 09:21:43.978988 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-kube-api-access-tdh7p" (OuterVolumeSpecName: "kube-api-access-tdh7p") pod "31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" (UID: "31c13acb-9eed-41f3-b4a5-63c4cdbdaa40"). InnerVolumeSpecName "kube-api-access-tdh7p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.056031 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdh7p\" (UniqueName: \"kubernetes.io/projected/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-kube-api-access-tdh7p\") on node \"crc\" DevicePath \"\""
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.056092 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.211581 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" (UID: "31c13acb-9eed-41f3-b4a5-63c4cdbdaa40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.260786 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.446713 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46txv" event={"ID":"31c13acb-9eed-41f3-b4a5-63c4cdbdaa40","Type":"ContainerDied","Data":"7a8b9fdb8dd18dde852347db3dd330348b113389c3c370464698be77a868bafb"}
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.447151 4809 scope.go:117] "RemoveContainer" containerID="08c3d8f88219b058a97746e7181f06b98934851205b15209fed58ea9d6bf9dec"
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.446804 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46txv"
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.489979 4809 scope.go:117] "RemoveContainer" containerID="a921f29721fba5bf17ebb48892cd422ae94fdf32f90985132d70452042a7204c"
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.502532 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46txv"]
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.514132 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-46txv"]
Mar 12 09:21:44 crc kubenswrapper[4809]: I0312 09:21:44.535937 4809 scope.go:117] "RemoveContainer" containerID="630f38dc4a43b9e34c3ed2c530afdc8f3d32dc5b8a9b97865374fa7eecb84da9"
Mar 12 09:21:45 crc kubenswrapper[4809]: I0312 09:21:45.048746 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 09:21:45 crc kubenswrapper[4809]: I0312 09:21:45.049723 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 09:21:45 crc kubenswrapper[4809]: I0312 09:21:45.129500 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" path="/var/lib/kubelet/pods/31c13acb-9eed-41f3-b4a5-63c4cdbdaa40/volumes"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.238158 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555122-5ms2r"]
Mar 12 09:22:00 crc kubenswrapper[4809]: E0312 09:22:00.239852 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="extract-utilities"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.239878 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="extract-utilities"
Mar 12 09:22:00 crc kubenswrapper[4809]: E0312 09:22:00.239889 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.239896 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server"
Mar 12 09:22:00 crc kubenswrapper[4809]: E0312 09:22:00.239927 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.239935 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server"
Mar 12 09:22:00 crc kubenswrapper[4809]: E0312 09:22:00.239954 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="extract-content"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.239961 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="extract-content"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.242005 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.242034 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c13acb-9eed-41f3-b4a5-63c4cdbdaa40" containerName="registry-server"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.246519 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555122-5ms2r"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.258923 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.258924 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.258931 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.278528 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555122-5ms2r"]
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.354420 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj4c4\" (UniqueName: \"kubernetes.io/projected/6da86c5d-9fed-4acf-8368-4cfb860c81d6-kube-api-access-gj4c4\") pod \"auto-csr-approver-29555122-5ms2r\" (UID: \"6da86c5d-9fed-4acf-8368-4cfb860c81d6\") " pod="openshift-infra/auto-csr-approver-29555122-5ms2r"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.457006 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj4c4\" (UniqueName: \"kubernetes.io/projected/6da86c5d-9fed-4acf-8368-4cfb860c81d6-kube-api-access-gj4c4\") pod \"auto-csr-approver-29555122-5ms2r\" (UID: \"6da86c5d-9fed-4acf-8368-4cfb860c81d6\") " pod="openshift-infra/auto-csr-approver-29555122-5ms2r"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.481093 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj4c4\" (UniqueName: \"kubernetes.io/projected/6da86c5d-9fed-4acf-8368-4cfb860c81d6-kube-api-access-gj4c4\") pod \"auto-csr-approver-29555122-5ms2r\" (UID: \"6da86c5d-9fed-4acf-8368-4cfb860c81d6\") " pod="openshift-infra/auto-csr-approver-29555122-5ms2r"
Mar 12 09:22:00 crc kubenswrapper[4809]: I0312 09:22:00.572745 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555122-5ms2r"
Mar 12 09:22:01 crc kubenswrapper[4809]: I0312 09:22:01.215830 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555122-5ms2r"]
Mar 12 09:22:01 crc kubenswrapper[4809]: I0312 09:22:01.699708 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555122-5ms2r" event={"ID":"6da86c5d-9fed-4acf-8368-4cfb860c81d6","Type":"ContainerStarted","Data":"8c4cf183bdecbb4b3313893cbdb29470d8a477e172fab2a2fe07d689502e81d3"}
Mar 12 09:22:03 crc kubenswrapper[4809]: I0312 09:22:03.729767 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555122-5ms2r" event={"ID":"6da86c5d-9fed-4acf-8368-4cfb860c81d6","Type":"ContainerStarted","Data":"6ab81616e8f5dff5c7ec21dd14d530614d046fab563b224a4039987226f0edce"}
Mar 12 09:22:03 crc kubenswrapper[4809]: I0312 09:22:03.745615 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555122-5ms2r" podStartSLOduration=2.4214341839999998 podStartE2EDuration="3.745570879s" podCreationTimestamp="2026-03-12 09:22:00 +0000 UTC" firstStartedPulling="2026-03-12 09:22:01.23249986 +0000 UTC m=+4994.814535593" lastFinishedPulling="2026-03-12 09:22:02.556636555 +0000 UTC m=+4996.138672288" observedRunningTime="2026-03-12 09:22:03.744544681 +0000 UTC m=+4997.326580414" watchObservedRunningTime="2026-03-12 09:22:03.745570879 +0000 UTC m=+4997.327606612"
Mar 12 09:22:05 crc kubenswrapper[4809]: I0312 09:22:05.752551 4809 generic.go:334] "Generic (PLEG): container finished" podID="6da86c5d-9fed-4acf-8368-4cfb860c81d6" containerID="6ab81616e8f5dff5c7ec21dd14d530614d046fab563b224a4039987226f0edce" exitCode=0
Mar 12 09:22:05 crc kubenswrapper[4809]: I0312 09:22:05.752640 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555122-5ms2r" event={"ID":"6da86c5d-9fed-4acf-8368-4cfb860c81d6","Type":"ContainerDied","Data":"6ab81616e8f5dff5c7ec21dd14d530614d046fab563b224a4039987226f0edce"}
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.531820 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555122-5ms2r"
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.689556 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj4c4\" (UniqueName: \"kubernetes.io/projected/6da86c5d-9fed-4acf-8368-4cfb860c81d6-kube-api-access-gj4c4\") pod \"6da86c5d-9fed-4acf-8368-4cfb860c81d6\" (UID: \"6da86c5d-9fed-4acf-8368-4cfb860c81d6\") "
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.699280 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da86c5d-9fed-4acf-8368-4cfb860c81d6-kube-api-access-gj4c4" (OuterVolumeSpecName: "kube-api-access-gj4c4") pod "6da86c5d-9fed-4acf-8368-4cfb860c81d6" (UID: "6da86c5d-9fed-4acf-8368-4cfb860c81d6"). InnerVolumeSpecName "kube-api-access-gj4c4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.789724 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555122-5ms2r" event={"ID":"6da86c5d-9fed-4acf-8368-4cfb860c81d6","Type":"ContainerDied","Data":"8c4cf183bdecbb4b3313893cbdb29470d8a477e172fab2a2fe07d689502e81d3"}
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.789773 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c4cf183bdecbb4b3313893cbdb29470d8a477e172fab2a2fe07d689502e81d3"
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.789850 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555122-5ms2r"
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.796064 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj4c4\" (UniqueName: \"kubernetes.io/projected/6da86c5d-9fed-4acf-8368-4cfb860c81d6-kube-api-access-gj4c4\") on node \"crc\" DevicePath \"\""
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.861761 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555116-vnrs9"]
Mar 12 09:22:07 crc kubenswrapper[4809]: I0312 09:22:07.879524 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555116-vnrs9"]
Mar 12 09:22:09 crc kubenswrapper[4809]: I0312 09:22:09.130661 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7ed2c6-8773-41ec-aff7-35d3127dc86d" path="/var/lib/kubelet/pods/1f7ed2c6-8773-41ec-aff7-35d3127dc86d/volumes"
Mar 12 09:22:15 crc kubenswrapper[4809]: I0312 09:22:15.048538 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 09:22:15 crc kubenswrapper[4809]: I0312 09:22:15.049352 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.048654 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.049435 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.049499 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c"
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.050809 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.050882 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" gracePeriod=600
Mar 12 09:22:45 crc kubenswrapper[4809]: E0312 09:22:45.185158 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.271569 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" exitCode=0
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.271628 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"}
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.271693 4809 scope.go:117] "RemoveContainer" containerID="8d31c87fdce03fe5a0350b3981acfc6e7cf41f3cc5482b0734974401077eea98"
Mar 12 09:22:45 crc kubenswrapper[4809]: I0312 09:22:45.273030 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:22:45 crc kubenswrapper[4809]: E0312 09:22:45.273827 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:22:49 crc kubenswrapper[4809]: I0312 09:22:49.577404 4809 scope.go:117] "RemoveContainer" containerID="ac864f3a8b8d53d727182fe38f205d55ba4da341ec2230a4b2d339751aa35dab"
Mar 12 09:22:59 crc kubenswrapper[4809]: I0312 09:22:59.112924 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:22:59 crc kubenswrapper[4809]: E0312 09:22:59.116085 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:23:10 crc kubenswrapper[4809]: I0312 09:23:10.107495 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:23:10 crc kubenswrapper[4809]: E0312 09:23:10.108912 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:23:23 crc kubenswrapper[4809]: I0312 09:23:23.106077 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:23:23 crc kubenswrapper[4809]: E0312 09:23:23.106787 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:23:34 crc kubenswrapper[4809]: I0312 09:23:34.106787 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:23:34 crc kubenswrapper[4809]: E0312 09:23:34.107780 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:23:45 crc kubenswrapper[4809]: I0312 09:23:45.107665 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:23:45 crc kubenswrapper[4809]: E0312 09:23:45.108811 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:23:58 crc kubenswrapper[4809]: I0312 09:23:58.106794 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:23:58 crc kubenswrapper[4809]: E0312 09:23:58.107796 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.161681 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555124-b9gln"]
Mar 12 09:24:00 crc kubenswrapper[4809]: E0312 09:24:00.162647 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da86c5d-9fed-4acf-8368-4cfb860c81d6" containerName="oc"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.162664 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da86c5d-9fed-4acf-8368-4cfb860c81d6" containerName="oc"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.162976 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da86c5d-9fed-4acf-8368-4cfb860c81d6" containerName="oc"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.164069 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555124-b9gln"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.166959 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.167072 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.167082 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.173692 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555124-b9gln"]
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.354189 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjtf7\" (UniqueName: \"kubernetes.io/projected/97100618-0145-4d43-b11c-fbaa657b6212-kube-api-access-jjtf7\") pod \"auto-csr-approver-29555124-b9gln\" (UID: \"97100618-0145-4d43-b11c-fbaa657b6212\") " pod="openshift-infra/auto-csr-approver-29555124-b9gln"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.457808 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjtf7\" (UniqueName: \"kubernetes.io/projected/97100618-0145-4d43-b11c-fbaa657b6212-kube-api-access-jjtf7\") pod \"auto-csr-approver-29555124-b9gln\" (UID: \"97100618-0145-4d43-b11c-fbaa657b6212\") " pod="openshift-infra/auto-csr-approver-29555124-b9gln"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.484389 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjtf7\" (UniqueName: \"kubernetes.io/projected/97100618-0145-4d43-b11c-fbaa657b6212-kube-api-access-jjtf7\") pod \"auto-csr-approver-29555124-b9gln\" (UID: \"97100618-0145-4d43-b11c-fbaa657b6212\") " pod="openshift-infra/auto-csr-approver-29555124-b9gln"
Mar 12 09:24:00 crc kubenswrapper[4809]: I0312 09:24:00.492461 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555124-b9gln"
Mar 12 09:24:01 crc kubenswrapper[4809]: I0312 09:24:01.017350 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555124-b9gln"]
Mar 12 09:24:01 crc kubenswrapper[4809]: I0312 09:24:01.253576 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555124-b9gln" event={"ID":"97100618-0145-4d43-b11c-fbaa657b6212","Type":"ContainerStarted","Data":"7f43f2ceda30b298d242340f10d26691d27f020eb4865aa4dc3158252ebfd3d4"}
Mar 12 09:24:03 crc kubenswrapper[4809]: I0312 09:24:03.277807 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555124-b9gln" event={"ID":"97100618-0145-4d43-b11c-fbaa657b6212","Type":"ContainerStarted","Data":"f1fbd606d2eeff0a68d94b7133f01f3e60ac01fb38129ad85456b28870add79c"}
Mar 12 09:24:03 crc kubenswrapper[4809]: I0312 09:24:03.301826 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555124-b9gln" podStartSLOduration=2.075691184 podStartE2EDuration="3.301805683s" podCreationTimestamp="2026-03-12 09:24:00 +0000 UTC" firstStartedPulling="2026-03-12 09:24:01.01944425 +0000 UTC m=+5114.601479983" lastFinishedPulling="2026-03-12 09:24:02.245558749 +0000 UTC m=+5115.827594482" observedRunningTime="2026-03-12 09:24:03.294534025 +0000 UTC m=+5116.876569768" watchObservedRunningTime="2026-03-12 09:24:03.301805683 +0000 UTC m=+5116.883841416"
Mar 12 09:24:04 crc kubenswrapper[4809]: I0312 09:24:04.291979 4809 generic.go:334] "Generic (PLEG): container finished" podID="97100618-0145-4d43-b11c-fbaa657b6212" containerID="f1fbd606d2eeff0a68d94b7133f01f3e60ac01fb38129ad85456b28870add79c" exitCode=0
Mar 12 09:24:04 crc kubenswrapper[4809]: I0312 09:24:04.292031 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555124-b9gln" event={"ID":"97100618-0145-4d43-b11c-fbaa657b6212","Type":"ContainerDied","Data":"f1fbd606d2eeff0a68d94b7133f01f3e60ac01fb38129ad85456b28870add79c"}
Mar 12 09:24:06 crc kubenswrapper[4809]: I0312 09:24:06.317266 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555124-b9gln" event={"ID":"97100618-0145-4d43-b11c-fbaa657b6212","Type":"ContainerDied","Data":"7f43f2ceda30b298d242340f10d26691d27f020eb4865aa4dc3158252ebfd3d4"}
Mar 12 09:24:06 crc kubenswrapper[4809]: I0312 09:24:06.317788 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f43f2ceda30b298d242340f10d26691d27f020eb4865aa4dc3158252ebfd3d4"
Mar 12 09:24:06 crc kubenswrapper[4809]: I0312 09:24:06.702397 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555124-b9gln"
Mar 12 09:24:06 crc kubenswrapper[4809]: I0312 09:24:06.883505 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjtf7\" (UniqueName: \"kubernetes.io/projected/97100618-0145-4d43-b11c-fbaa657b6212-kube-api-access-jjtf7\") pod \"97100618-0145-4d43-b11c-fbaa657b6212\" (UID: \"97100618-0145-4d43-b11c-fbaa657b6212\") "
Mar 12 09:24:06 crc kubenswrapper[4809]: I0312 09:24:06.889161 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97100618-0145-4d43-b11c-fbaa657b6212-kube-api-access-jjtf7" (OuterVolumeSpecName: "kube-api-access-jjtf7") pod "97100618-0145-4d43-b11c-fbaa657b6212" (UID: "97100618-0145-4d43-b11c-fbaa657b6212"). InnerVolumeSpecName "kube-api-access-jjtf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 09:24:06 crc kubenswrapper[4809]: I0312 09:24:06.986430 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjtf7\" (UniqueName: \"kubernetes.io/projected/97100618-0145-4d43-b11c-fbaa657b6212-kube-api-access-jjtf7\") on node \"crc\" DevicePath \"\""
Mar 12 09:24:07 crc kubenswrapper[4809]: I0312 09:24:07.326374 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555124-b9gln"
Mar 12 09:24:07 crc kubenswrapper[4809]: I0312 09:24:07.777341 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555118-h7hjw"]
Mar 12 09:24:07 crc kubenswrapper[4809]: I0312 09:24:07.793369 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555118-h7hjw"]
Mar 12 09:24:09 crc kubenswrapper[4809]: I0312 09:24:09.130898 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c097ab-15b8-4188-a5ee-bf0f310c4d50" path="/var/lib/kubelet/pods/95c097ab-15b8-4188-a5ee-bf0f310c4d50/volumes"
Mar 12 09:24:11 crc kubenswrapper[4809]: I0312 09:24:11.107327 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:24:11 crc kubenswrapper[4809]: E0312 09:24:11.107923 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:24:25 crc kubenswrapper[4809]: I0312 09:24:25.107091 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:24:25 crc kubenswrapper[4809]: E0312 09:24:25.108523 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:24:39 crc kubenswrapper[4809]: I0312 09:24:39.106041 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:24:39 crc kubenswrapper[4809]: E0312 09:24:39.107839 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:24:49 crc kubenswrapper[4809]: I0312 09:24:49.747279 4809 scope.go:117] "RemoveContainer" containerID="d3eb763285b96ac6f8961c5b135287314917683899aa30d29647d0b87c4b7b57"
Mar 12 09:24:51 crc kubenswrapper[4809]: I0312 09:24:51.107205 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:24:51 crc kubenswrapper[4809]: E0312 09:24:51.108278 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:25:02 crc kubenswrapper[4809]: I0312 09:25:02.107247 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:25:02 crc kubenswrapper[4809]: E0312 09:25:02.108652 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:25:13 crc kubenswrapper[4809]: I0312 09:25:13.977768 4809 trace.go:236] Trace[2138515769]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (12-Mar-2026 09:25:12.944) (total time: 1029ms):
Mar 12 09:25:13 crc kubenswrapper[4809]: Trace[2138515769]: [1.029558805s] [1.029558805s] END
Mar 12 09:25:16 crc kubenswrapper[4809]: I0312 09:25:16.107513 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e"
Mar 12 09:25:16 crc kubenswrapper[4809]: E0312 09:25:16.108544 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f"
Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.786026 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-clb7l"]
Mar 12 09:25:23 crc kubenswrapper[4809]: E0312 09:25:23.787343 4809 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="97100618-0145-4d43-b11c-fbaa657b6212" containerName="oc" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.787364 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="97100618-0145-4d43-b11c-fbaa657b6212" containerName="oc" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.787657 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="97100618-0145-4d43-b11c-fbaa657b6212" containerName="oc" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.790079 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.802975 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-clb7l"] Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.808279 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccd05106-f862-4106-bc2e-4ec90d4240dc-catalog-content\") pod \"community-operators-clb7l\" (UID: \"ccd05106-f862-4106-bc2e-4ec90d4240dc\") " pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.808346 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbs4\" (UniqueName: \"kubernetes.io/projected/ccd05106-f862-4106-bc2e-4ec90d4240dc-kube-api-access-cdbs4\") pod \"community-operators-clb7l\" (UID: \"ccd05106-f862-4106-bc2e-4ec90d4240dc\") " pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.808404 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccd05106-f862-4106-bc2e-4ec90d4240dc-utilities\") pod \"community-operators-clb7l\" (UID: 
\"ccd05106-f862-4106-bc2e-4ec90d4240dc\") " pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.913023 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccd05106-f862-4106-bc2e-4ec90d4240dc-catalog-content\") pod \"community-operators-clb7l\" (UID: \"ccd05106-f862-4106-bc2e-4ec90d4240dc\") " pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.913216 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdbs4\" (UniqueName: \"kubernetes.io/projected/ccd05106-f862-4106-bc2e-4ec90d4240dc-kube-api-access-cdbs4\") pod \"community-operators-clb7l\" (UID: \"ccd05106-f862-4106-bc2e-4ec90d4240dc\") " pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.913356 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccd05106-f862-4106-bc2e-4ec90d4240dc-utilities\") pod \"community-operators-clb7l\" (UID: \"ccd05106-f862-4106-bc2e-4ec90d4240dc\") " pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.913890 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccd05106-f862-4106-bc2e-4ec90d4240dc-utilities\") pod \"community-operators-clb7l\" (UID: \"ccd05106-f862-4106-bc2e-4ec90d4240dc\") " pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.914048 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccd05106-f862-4106-bc2e-4ec90d4240dc-catalog-content\") pod \"community-operators-clb7l\" (UID: \"ccd05106-f862-4106-bc2e-4ec90d4240dc\") 
" pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:23 crc kubenswrapper[4809]: I0312 09:25:23.938015 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdbs4\" (UniqueName: \"kubernetes.io/projected/ccd05106-f862-4106-bc2e-4ec90d4240dc-kube-api-access-cdbs4\") pod \"community-operators-clb7l\" (UID: \"ccd05106-f862-4106-bc2e-4ec90d4240dc\") " pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:24 crc kubenswrapper[4809]: I0312 09:25:24.121300 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:24 crc kubenswrapper[4809]: I0312 09:25:24.694505 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-clb7l"] Mar 12 09:25:25 crc kubenswrapper[4809]: I0312 09:25:25.343058 4809 generic.go:334] "Generic (PLEG): container finished" podID="ccd05106-f862-4106-bc2e-4ec90d4240dc" containerID="16c693c05491c2cef84526a34a14727efb702dba934e543795cd151e10dd5b10" exitCode=0 Mar 12 09:25:25 crc kubenswrapper[4809]: I0312 09:25:25.343145 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clb7l" event={"ID":"ccd05106-f862-4106-bc2e-4ec90d4240dc","Type":"ContainerDied","Data":"16c693c05491c2cef84526a34a14727efb702dba934e543795cd151e10dd5b10"} Mar 12 09:25:25 crc kubenswrapper[4809]: I0312 09:25:25.343189 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clb7l" event={"ID":"ccd05106-f862-4106-bc2e-4ec90d4240dc","Type":"ContainerStarted","Data":"965b5047939aa4fa98d6c96c925afaaa48520ab44d9a9a5e9f2cba8947f6e8ce"} Mar 12 09:25:25 crc kubenswrapper[4809]: I0312 09:25:25.345627 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 09:25:30 crc kubenswrapper[4809]: I0312 09:25:30.107007 4809 scope.go:117] "RemoveContainer" 
containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:25:30 crc kubenswrapper[4809]: E0312 09:25:30.108327 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:25:32 crc kubenswrapper[4809]: I0312 09:25:32.448729 4809 generic.go:334] "Generic (PLEG): container finished" podID="ccd05106-f862-4106-bc2e-4ec90d4240dc" containerID="02fcd115823656061d647c89d0753ad318cd7f0bab0ff64df91c02c13769576f" exitCode=0 Mar 12 09:25:32 crc kubenswrapper[4809]: I0312 09:25:32.448853 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clb7l" event={"ID":"ccd05106-f862-4106-bc2e-4ec90d4240dc","Type":"ContainerDied","Data":"02fcd115823656061d647c89d0753ad318cd7f0bab0ff64df91c02c13769576f"} Mar 12 09:25:34 crc kubenswrapper[4809]: I0312 09:25:34.478301 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-clb7l" event={"ID":"ccd05106-f862-4106-bc2e-4ec90d4240dc","Type":"ContainerStarted","Data":"aa0e7563dd5ab0734ce9d26586e9b342fb7efbd159199d1efa20a31d6d9c9844"} Mar 12 09:25:34 crc kubenswrapper[4809]: I0312 09:25:34.515457 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-clb7l" podStartSLOduration=3.5695510280000002 podStartE2EDuration="11.515417982s" podCreationTimestamp="2026-03-12 09:25:23 +0000 UTC" firstStartedPulling="2026-03-12 09:25:25.345328269 +0000 UTC m=+5198.927363992" lastFinishedPulling="2026-03-12 09:25:33.291195203 +0000 UTC m=+5206.873230946" observedRunningTime="2026-03-12 
09:25:34.498244162 +0000 UTC m=+5208.080279905" watchObservedRunningTime="2026-03-12 09:25:34.515417982 +0000 UTC m=+5208.097453715" Mar 12 09:25:44 crc kubenswrapper[4809]: I0312 09:25:44.122307 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:44 crc kubenswrapper[4809]: I0312 09:25:44.122922 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:44 crc kubenswrapper[4809]: I0312 09:25:44.185997 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:44 crc kubenswrapper[4809]: I0312 09:25:44.897032 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-clb7l" Mar 12 09:25:44 crc kubenswrapper[4809]: I0312 09:25:44.978012 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-clb7l"] Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.033476 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7hlx"] Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.034444 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7hlx" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" containerID="cri-o://b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27" gracePeriod=2 Mar 12 09:25:45 crc kubenswrapper[4809]: E0312 09:25:45.102145 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27 is running failed: container process not found" 
containerID="b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27" cmd=["grpc_health_probe","-addr=:50051"] Mar 12 09:25:45 crc kubenswrapper[4809]: E0312 09:25:45.103398 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27 is running failed: container process not found" containerID="b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27" cmd=["grpc_health_probe","-addr=:50051"] Mar 12 09:25:45 crc kubenswrapper[4809]: E0312 09:25:45.104146 4809 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27 is running failed: container process not found" containerID="b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27" cmd=["grpc_health_probe","-addr=:50051"] Mar 12 09:25:45 crc kubenswrapper[4809]: E0312 09:25:45.104182 4809 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-q7hlx" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.108252 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:25:45 crc kubenswrapper[4809]: E0312 09:25:45.108561 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.645132 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7hlx" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.772726 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwgcm\" (UniqueName: \"kubernetes.io/projected/3e5c36fc-835f-40b8-8b0c-380d00797bb0-kube-api-access-dwgcm\") pod \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.772907 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-catalog-content\") pod \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.773039 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-utilities\") pod \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\" (UID: \"3e5c36fc-835f-40b8-8b0c-380d00797bb0\") " Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.777137 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-utilities" (OuterVolumeSpecName: "utilities") pod "3e5c36fc-835f-40b8-8b0c-380d00797bb0" (UID: "3e5c36fc-835f-40b8-8b0c-380d00797bb0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.785421 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e5c36fc-835f-40b8-8b0c-380d00797bb0-kube-api-access-dwgcm" (OuterVolumeSpecName: "kube-api-access-dwgcm") pod "3e5c36fc-835f-40b8-8b0c-380d00797bb0" (UID: "3e5c36fc-835f-40b8-8b0c-380d00797bb0"). InnerVolumeSpecName "kube-api-access-dwgcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.856013 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e5c36fc-835f-40b8-8b0c-380d00797bb0" (UID: "3e5c36fc-835f-40b8-8b0c-380d00797bb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.872382 4809 generic.go:334] "Generic (PLEG): container finished" podID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerID="b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27" exitCode=0 Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.875468 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7hlx" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.881946 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hlx" event={"ID":"3e5c36fc-835f-40b8-8b0c-380d00797bb0","Type":"ContainerDied","Data":"b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27"} Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.882051 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hlx" event={"ID":"3e5c36fc-835f-40b8-8b0c-380d00797bb0","Type":"ContainerDied","Data":"bbc471db4f01de2514fe1a1bfd4c7fae422c9d862e0498267d88e70511fec1b5"} Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.882072 4809 scope.go:117] "RemoveContainer" containerID="b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.886115 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.886195 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5c36fc-835f-40b8-8b0c-380d00797bb0-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.886209 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwgcm\" (UniqueName: \"kubernetes.io/projected/3e5c36fc-835f-40b8-8b0c-380d00797bb0-kube-api-access-dwgcm\") on node \"crc\" DevicePath \"\"" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.928801 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7hlx"] Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.940246 4809 scope.go:117] "RemoveContainer" 
containerID="c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a" Mar 12 09:25:45 crc kubenswrapper[4809]: I0312 09:25:45.943678 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7hlx"] Mar 12 09:25:46 crc kubenswrapper[4809]: I0312 09:25:46.009269 4809 scope.go:117] "RemoveContainer" containerID="ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1" Mar 12 09:25:46 crc kubenswrapper[4809]: I0312 09:25:46.053298 4809 scope.go:117] "RemoveContainer" containerID="b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27" Mar 12 09:25:46 crc kubenswrapper[4809]: E0312 09:25:46.054158 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27\": container with ID starting with b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27 not found: ID does not exist" containerID="b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27" Mar 12 09:25:46 crc kubenswrapper[4809]: I0312 09:25:46.054257 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27"} err="failed to get container status \"b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27\": rpc error: code = NotFound desc = could not find container \"b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27\": container with ID starting with b4e76551ac26182bb53e4a65fb0092d78bd05de2d5f17b94593356fde90ced27 not found: ID does not exist" Mar 12 09:25:46 crc kubenswrapper[4809]: I0312 09:25:46.054342 4809 scope.go:117] "RemoveContainer" containerID="c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a" Mar 12 09:25:46 crc kubenswrapper[4809]: E0312 09:25:46.054819 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a\": container with ID starting with c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a not found: ID does not exist" containerID="c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a" Mar 12 09:25:46 crc kubenswrapper[4809]: I0312 09:25:46.054849 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a"} err="failed to get container status \"c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a\": rpc error: code = NotFound desc = could not find container \"c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a\": container with ID starting with c13b022ff25101bcf48371177a34cfebd86cc15fe8b1efa6f34e9d9083dd491a not found: ID does not exist" Mar 12 09:25:46 crc kubenswrapper[4809]: I0312 09:25:46.054868 4809 scope.go:117] "RemoveContainer" containerID="ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1" Mar 12 09:25:46 crc kubenswrapper[4809]: E0312 09:25:46.055873 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1\": container with ID starting with ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1 not found: ID does not exist" containerID="ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1" Mar 12 09:25:46 crc kubenswrapper[4809]: I0312 09:25:46.055965 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1"} err="failed to get container status \"ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1\": rpc error: code = NotFound desc = could not find container 
\"ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1\": container with ID starting with ace62c9ef8715449a2bb53e189c57e166484730f91c6348eb27d6e731b91c8e1 not found: ID does not exist" Mar 12 09:25:47 crc kubenswrapper[4809]: I0312 09:25:47.132220 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" path="/var/lib/kubelet/pods/3e5c36fc-835f-40b8-8b0c-380d00797bb0/volumes" Mar 12 09:25:49 crc kubenswrapper[4809]: I0312 09:25:49.883103 4809 scope.go:117] "RemoveContainer" containerID="1ca7de1eb67729882a442d8883d9edfac4f18c1014005faffc334342bbc8befb" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.447795 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cd6gq"] Mar 12 09:25:54 crc kubenswrapper[4809]: E0312 09:25:54.448807 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.448821 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" Mar 12 09:25:54 crc kubenswrapper[4809]: E0312 09:25:54.448856 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="extract-utilities" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.448862 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="extract-utilities" Mar 12 09:25:54 crc kubenswrapper[4809]: E0312 09:25:54.448887 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="extract-content" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.448894 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="extract-content" Mar 12 
09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.449147 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e5c36fc-835f-40b8-8b0c-380d00797bb0" containerName="registry-server" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.451064 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.468051 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd6gq"] Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.579231 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzm54\" (UniqueName: \"kubernetes.io/projected/dd08bafc-76bd-475b-911f-eb57f7e22298-kube-api-access-lzm54\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.579648 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-utilities\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.579874 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-catalog-content\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.681971 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-catalog-content\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.682087 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzm54\" (UniqueName: \"kubernetes.io/projected/dd08bafc-76bd-475b-911f-eb57f7e22298-kube-api-access-lzm54\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.682177 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-utilities\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.683161 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-catalog-content\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:54 crc kubenswrapper[4809]: I0312 09:25:54.683420 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-utilities\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:55 crc kubenswrapper[4809]: I0312 09:25:55.208540 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzm54\" (UniqueName: 
\"kubernetes.io/projected/dd08bafc-76bd-475b-911f-eb57f7e22298-kube-api-access-lzm54\") pod \"certified-operators-cd6gq\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:55 crc kubenswrapper[4809]: I0312 09:25:55.378059 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:25:55 crc kubenswrapper[4809]: I0312 09:25:55.969247 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd6gq"] Mar 12 09:25:56 crc kubenswrapper[4809]: I0312 09:25:56.031832 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6gq" event={"ID":"dd08bafc-76bd-475b-911f-eb57f7e22298","Type":"ContainerStarted","Data":"dab2697b77e3c97cb76cf199d124c401d66afe384639b1161592c69fcd69b574"} Mar 12 09:25:57 crc kubenswrapper[4809]: I0312 09:25:57.046403 4809 generic.go:334] "Generic (PLEG): container finished" podID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerID="71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056" exitCode=0 Mar 12 09:25:57 crc kubenswrapper[4809]: I0312 09:25:57.046467 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6gq" event={"ID":"dd08bafc-76bd-475b-911f-eb57f7e22298","Type":"ContainerDied","Data":"71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056"} Mar 12 09:25:59 crc kubenswrapper[4809]: I0312 09:25:59.081860 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6gq" event={"ID":"dd08bafc-76bd-475b-911f-eb57f7e22298","Type":"ContainerStarted","Data":"22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc"} Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.112976 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 
12 09:26:00 crc kubenswrapper[4809]: E0312 09:26:00.116657 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.159459 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555126-x7xxx"] Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.161802 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555126-x7xxx" Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.165840 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.166206 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.166373 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.187171 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555126-x7xxx"] Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.283863 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9n9\" (UniqueName: \"kubernetes.io/projected/ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9-kube-api-access-zf9n9\") pod \"auto-csr-approver-29555126-x7xxx\" (UID: \"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9\") " pod="openshift-infra/auto-csr-approver-29555126-x7xxx" Mar 
12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.386671 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9n9\" (UniqueName: \"kubernetes.io/projected/ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9-kube-api-access-zf9n9\") pod \"auto-csr-approver-29555126-x7xxx\" (UID: \"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9\") " pod="openshift-infra/auto-csr-approver-29555126-x7xxx" Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.409060 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9n9\" (UniqueName: \"kubernetes.io/projected/ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9-kube-api-access-zf9n9\") pod \"auto-csr-approver-29555126-x7xxx\" (UID: \"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9\") " pod="openshift-infra/auto-csr-approver-29555126-x7xxx" Mar 12 09:26:00 crc kubenswrapper[4809]: I0312 09:26:00.490328 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555126-x7xxx" Mar 12 09:26:01 crc kubenswrapper[4809]: W0312 09:26:01.078048 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebcd66a4_9afe_4a78_ac11_b8bbc93e57c9.slice/crio-d686bfcc3d1950939b542256e37148450b3a10c7fefcb44cd673d5922d2fa520 WatchSource:0}: Error finding container d686bfcc3d1950939b542256e37148450b3a10c7fefcb44cd673d5922d2fa520: Status 404 returned error can't find the container with id d686bfcc3d1950939b542256e37148450b3a10c7fefcb44cd673d5922d2fa520 Mar 12 09:26:01 crc kubenswrapper[4809]: I0312 09:26:01.078700 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555126-x7xxx"] Mar 12 09:26:01 crc kubenswrapper[4809]: I0312 09:26:01.130662 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555126-x7xxx" 
event={"ID":"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9","Type":"ContainerStarted","Data":"d686bfcc3d1950939b542256e37148450b3a10c7fefcb44cd673d5922d2fa520"} Mar 12 09:26:01 crc kubenswrapper[4809]: I0312 09:26:01.133975 4809 generic.go:334] "Generic (PLEG): container finished" podID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerID="22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc" exitCode=0 Mar 12 09:26:01 crc kubenswrapper[4809]: I0312 09:26:01.134054 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6gq" event={"ID":"dd08bafc-76bd-475b-911f-eb57f7e22298","Type":"ContainerDied","Data":"22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc"} Mar 12 09:26:03 crc kubenswrapper[4809]: I0312 09:26:03.173551 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6gq" event={"ID":"dd08bafc-76bd-475b-911f-eb57f7e22298","Type":"ContainerStarted","Data":"374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104"} Mar 12 09:26:03 crc kubenswrapper[4809]: I0312 09:26:03.213205 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cd6gq" podStartSLOduration=3.416799365 podStartE2EDuration="9.213180451s" podCreationTimestamp="2026-03-12 09:25:54 +0000 UTC" firstStartedPulling="2026-03-12 09:25:57.050044286 +0000 UTC m=+5230.632080019" lastFinishedPulling="2026-03-12 09:26:02.846425372 +0000 UTC m=+5236.428461105" observedRunningTime="2026-03-12 09:26:03.199778715 +0000 UTC m=+5236.781814448" watchObservedRunningTime="2026-03-12 09:26:03.213180451 +0000 UTC m=+5236.795216184" Mar 12 09:26:05 crc kubenswrapper[4809]: I0312 09:26:05.208671 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555126-x7xxx" 
event={"ID":"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9","Type":"ContainerStarted","Data":"34f3d9ca557dc15191aa3d8424185e07320e3772a33b104269650f4242b96c15"} Mar 12 09:26:05 crc kubenswrapper[4809]: I0312 09:26:05.378396 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:26:05 crc kubenswrapper[4809]: I0312 09:26:05.378790 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:26:06 crc kubenswrapper[4809]: I0312 09:26:06.223378 4809 generic.go:334] "Generic (PLEG): container finished" podID="ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9" containerID="34f3d9ca557dc15191aa3d8424185e07320e3772a33b104269650f4242b96c15" exitCode=0 Mar 12 09:26:06 crc kubenswrapper[4809]: I0312 09:26:06.223447 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555126-x7xxx" event={"ID":"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9","Type":"ContainerDied","Data":"34f3d9ca557dc15191aa3d8424185e07320e3772a33b104269650f4242b96c15"} Mar 12 09:26:06 crc kubenswrapper[4809]: I0312 09:26:06.668528 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cd6gq" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="registry-server" probeResult="failure" output=< Mar 12 09:26:06 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:26:06 crc kubenswrapper[4809]: > Mar 12 09:26:08 crc kubenswrapper[4809]: I0312 09:26:08.252590 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555126-x7xxx" event={"ID":"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9","Type":"ContainerDied","Data":"d686bfcc3d1950939b542256e37148450b3a10c7fefcb44cd673d5922d2fa520"} Mar 12 09:26:08 crc kubenswrapper[4809]: I0312 09:26:08.252936 4809 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d686bfcc3d1950939b542256e37148450b3a10c7fefcb44cd673d5922d2fa520" Mar 12 09:26:08 crc kubenswrapper[4809]: I0312 09:26:08.626231 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555126-x7xxx" Mar 12 09:26:08 crc kubenswrapper[4809]: I0312 09:26:08.759127 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9n9\" (UniqueName: \"kubernetes.io/projected/ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9-kube-api-access-zf9n9\") pod \"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9\" (UID: \"ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9\") " Mar 12 09:26:08 crc kubenswrapper[4809]: I0312 09:26:08.797780 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9-kube-api-access-zf9n9" (OuterVolumeSpecName: "kube-api-access-zf9n9") pod "ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9" (UID: "ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9"). InnerVolumeSpecName "kube-api-access-zf9n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:26:08 crc kubenswrapper[4809]: I0312 09:26:08.863526 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9n9\" (UniqueName: \"kubernetes.io/projected/ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9-kube-api-access-zf9n9\") on node \"crc\" DevicePath \"\"" Mar 12 09:26:09 crc kubenswrapper[4809]: I0312 09:26:09.284096 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555126-x7xxx" Mar 12 09:26:09 crc kubenswrapper[4809]: I0312 09:26:09.724140 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555120-z4897"] Mar 12 09:26:09 crc kubenswrapper[4809]: I0312 09:26:09.737917 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555120-z4897"] Mar 12 09:26:11 crc kubenswrapper[4809]: I0312 09:26:11.106749 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:26:11 crc kubenswrapper[4809]: E0312 09:26:11.107545 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:26:11 crc kubenswrapper[4809]: I0312 09:26:11.121351 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b" path="/var/lib/kubelet/pods/63b7fc38-9fd2-49cc-a08e-ac23cb1e8e3b/volumes" Mar 12 09:26:16 crc kubenswrapper[4809]: I0312 09:26:16.434413 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cd6gq" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="registry-server" probeResult="failure" output=< Mar 12 09:26:16 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:26:16 crc kubenswrapper[4809]: > Mar 12 09:26:23 crc kubenswrapper[4809]: I0312 09:26:23.106325 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:26:23 crc kubenswrapper[4809]: E0312 
09:26:23.107240 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:26:25 crc kubenswrapper[4809]: I0312 09:26:25.452922 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:26:25 crc kubenswrapper[4809]: I0312 09:26:25.523934 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:26:25 crc kubenswrapper[4809]: I0312 09:26:25.705936 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd6gq"] Mar 12 09:26:26 crc kubenswrapper[4809]: I0312 09:26:26.504241 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cd6gq" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="registry-server" containerID="cri-o://374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104" gracePeriod=2 Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.127767 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.188542 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzm54\" (UniqueName: \"kubernetes.io/projected/dd08bafc-76bd-475b-911f-eb57f7e22298-kube-api-access-lzm54\") pod \"dd08bafc-76bd-475b-911f-eb57f7e22298\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.188716 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-utilities\") pod \"dd08bafc-76bd-475b-911f-eb57f7e22298\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.188852 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-catalog-content\") pod \"dd08bafc-76bd-475b-911f-eb57f7e22298\" (UID: \"dd08bafc-76bd-475b-911f-eb57f7e22298\") " Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.194286 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-utilities" (OuterVolumeSpecName: "utilities") pod "dd08bafc-76bd-475b-911f-eb57f7e22298" (UID: "dd08bafc-76bd-475b-911f-eb57f7e22298"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.202692 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd08bafc-76bd-475b-911f-eb57f7e22298-kube-api-access-lzm54" (OuterVolumeSpecName: "kube-api-access-lzm54") pod "dd08bafc-76bd-475b-911f-eb57f7e22298" (UID: "dd08bafc-76bd-475b-911f-eb57f7e22298"). InnerVolumeSpecName "kube-api-access-lzm54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.291413 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.291448 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzm54\" (UniqueName: \"kubernetes.io/projected/dd08bafc-76bd-475b-911f-eb57f7e22298-kube-api-access-lzm54\") on node \"crc\" DevicePath \"\"" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.326356 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd08bafc-76bd-475b-911f-eb57f7e22298" (UID: "dd08bafc-76bd-475b-911f-eb57f7e22298"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.394070 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd08bafc-76bd-475b-911f-eb57f7e22298-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.535316 4809 generic.go:334] "Generic (PLEG): container finished" podID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerID="374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104" exitCode=0 Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.535366 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6gq" event={"ID":"dd08bafc-76bd-475b-911f-eb57f7e22298","Type":"ContainerDied","Data":"374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104"} Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.535392 4809 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd6gq" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.535406 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6gq" event={"ID":"dd08bafc-76bd-475b-911f-eb57f7e22298","Type":"ContainerDied","Data":"dab2697b77e3c97cb76cf199d124c401d66afe384639b1161592c69fcd69b574"} Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.535435 4809 scope.go:117] "RemoveContainer" containerID="374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.578077 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd6gq"] Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.586148 4809 scope.go:117] "RemoveContainer" containerID="22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.593372 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cd6gq"] Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.624949 4809 scope.go:117] "RemoveContainer" containerID="71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.693072 4809 scope.go:117] "RemoveContainer" containerID="374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104" Mar 12 09:26:27 crc kubenswrapper[4809]: E0312 09:26:27.693657 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104\": container with ID starting with 374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104 not found: ID does not exist" containerID="374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.693689 
4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104"} err="failed to get container status \"374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104\": rpc error: code = NotFound desc = could not find container \"374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104\": container with ID starting with 374f669a9b5f83ff945c75189f85e0296e946785b76cd3f9a58b658ab7a2c104 not found: ID does not exist" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.693711 4809 scope.go:117] "RemoveContainer" containerID="22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc" Mar 12 09:26:27 crc kubenswrapper[4809]: E0312 09:26:27.694302 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc\": container with ID starting with 22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc not found: ID does not exist" containerID="22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.694361 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc"} err="failed to get container status \"22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc\": rpc error: code = NotFound desc = could not find container \"22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc\": container with ID starting with 22cc89695fe751d6837a25741f05fb550e938ed1309993cd1a07b0e8058476cc not found: ID does not exist" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.694393 4809 scope.go:117] "RemoveContainer" containerID="71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056" Mar 12 09:26:27 crc kubenswrapper[4809]: E0312 
09:26:27.695061 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056\": container with ID starting with 71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056 not found: ID does not exist" containerID="71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056" Mar 12 09:26:27 crc kubenswrapper[4809]: I0312 09:26:27.695103 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056"} err="failed to get container status \"71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056\": rpc error: code = NotFound desc = could not find container \"71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056\": container with ID starting with 71e29f72948e8a47acf97b021171a7b7e25c8621bb9688ef1db9d12999ac7056 not found: ID does not exist" Mar 12 09:26:29 crc kubenswrapper[4809]: I0312 09:26:29.121678 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" path="/var/lib/kubelet/pods/dd08bafc-76bd-475b-911f-eb57f7e22298/volumes" Mar 12 09:26:35 crc kubenswrapper[4809]: I0312 09:26:35.106501 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:26:35 crc kubenswrapper[4809]: E0312 09:26:35.107577 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:26:49 crc kubenswrapper[4809]: I0312 09:26:49.106518 
4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:26:49 crc kubenswrapper[4809]: E0312 09:26:49.107544 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:26:50 crc kubenswrapper[4809]: I0312 09:26:50.017005 4809 scope.go:117] "RemoveContainer" containerID="b1a4a9acf515fc0770bc0ad84b9c1b1e7de812d10e38373b6504cb2e72cdf4ee" Mar 12 09:27:04 crc kubenswrapper[4809]: I0312 09:27:04.106740 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:27:04 crc kubenswrapper[4809]: E0312 09:27:04.108034 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:27:15 crc kubenswrapper[4809]: I0312 09:27:15.106965 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:27:15 crc kubenswrapper[4809]: E0312 09:27:15.107834 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:27:30 crc kubenswrapper[4809]: I0312 09:27:30.106546 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:27:30 crc kubenswrapper[4809]: E0312 09:27:30.107477 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:27:44 crc kubenswrapper[4809]: I0312 09:27:44.107239 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:27:44 crc kubenswrapper[4809]: E0312 09:27:44.108490 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:27:56 crc kubenswrapper[4809]: I0312 09:27:56.107387 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:27:56 crc kubenswrapper[4809]: I0312 09:27:56.660948 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" 
event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"f804fad92fafa2c516d60507d510027321c4fa2fec6850c9fc8f05bb782945c2"} Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.169306 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555128-wq2tx"] Mar 12 09:28:00 crc kubenswrapper[4809]: E0312 09:28:00.170615 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="registry-server" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.170634 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="registry-server" Mar 12 09:28:00 crc kubenswrapper[4809]: E0312 09:28:00.170648 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="extract-utilities" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.170658 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="extract-utilities" Mar 12 09:28:00 crc kubenswrapper[4809]: E0312 09:28:00.170686 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="extract-content" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.170694 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="extract-content" Mar 12 09:28:00 crc kubenswrapper[4809]: E0312 09:28:00.170715 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9" containerName="oc" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.170724 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9" containerName="oc" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.171033 4809 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="dd08bafc-76bd-475b-911f-eb57f7e22298" containerName="registry-server" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.171064 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9" containerName="oc" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.172373 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.184987 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.185072 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.186787 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555128-wq2tx"] Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.190545 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.319337 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7vzj\" (UniqueName: \"kubernetes.io/projected/8b67a7e8-4d53-4051-b495-f14ae3034abf-kube-api-access-f7vzj\") pod \"auto-csr-approver-29555128-wq2tx\" (UID: \"8b67a7e8-4d53-4051-b495-f14ae3034abf\") " pod="openshift-infra/auto-csr-approver-29555128-wq2tx" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.422283 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7vzj\" (UniqueName: \"kubernetes.io/projected/8b67a7e8-4d53-4051-b495-f14ae3034abf-kube-api-access-f7vzj\") pod \"auto-csr-approver-29555128-wq2tx\" (UID: \"8b67a7e8-4d53-4051-b495-f14ae3034abf\") " 
pod="openshift-infra/auto-csr-approver-29555128-wq2tx" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.457704 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7vzj\" (UniqueName: \"kubernetes.io/projected/8b67a7e8-4d53-4051-b495-f14ae3034abf-kube-api-access-f7vzj\") pod \"auto-csr-approver-29555128-wq2tx\" (UID: \"8b67a7e8-4d53-4051-b495-f14ae3034abf\") " pod="openshift-infra/auto-csr-approver-29555128-wq2tx" Mar 12 09:28:00 crc kubenswrapper[4809]: I0312 09:28:00.510810 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" Mar 12 09:28:02 crc kubenswrapper[4809]: I0312 09:28:02.589813 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555128-wq2tx"] Mar 12 09:28:02 crc kubenswrapper[4809]: I0312 09:28:02.776691 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" event={"ID":"8b67a7e8-4d53-4051-b495-f14ae3034abf","Type":"ContainerStarted","Data":"e9632af60fc02ddd7970f2b1be167a2ea9729c40d0e8058949a165c06c1d3389"} Mar 12 09:28:04 crc kubenswrapper[4809]: I0312 09:28:04.801316 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" event={"ID":"8b67a7e8-4d53-4051-b495-f14ae3034abf","Type":"ContainerStarted","Data":"992d6e02bbf625847e5feae5c029d7ef55c47651a7beb9d362455f4b8d7c2875"} Mar 12 09:28:04 crc kubenswrapper[4809]: I0312 09:28:04.825706 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" podStartSLOduration=3.600888361 podStartE2EDuration="4.825683166s" podCreationTimestamp="2026-03-12 09:28:00 +0000 UTC" firstStartedPulling="2026-03-12 09:28:02.597391771 +0000 UTC m=+5356.179427504" lastFinishedPulling="2026-03-12 09:28:03.822186586 +0000 UTC m=+5357.404222309" observedRunningTime="2026-03-12 
09:28:04.816186427 +0000 UTC m=+5358.398222180" watchObservedRunningTime="2026-03-12 09:28:04.825683166 +0000 UTC m=+5358.407718899" Mar 12 09:28:05 crc kubenswrapper[4809]: I0312 09:28:05.814167 4809 generic.go:334] "Generic (PLEG): container finished" podID="8b67a7e8-4d53-4051-b495-f14ae3034abf" containerID="992d6e02bbf625847e5feae5c029d7ef55c47651a7beb9d362455f4b8d7c2875" exitCode=0 Mar 12 09:28:05 crc kubenswrapper[4809]: I0312 09:28:05.814254 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" event={"ID":"8b67a7e8-4d53-4051-b495-f14ae3034abf","Type":"ContainerDied","Data":"992d6e02bbf625847e5feae5c029d7ef55c47651a7beb9d362455f4b8d7c2875"} Mar 12 09:28:07 crc kubenswrapper[4809]: I0312 09:28:07.837630 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" event={"ID":"8b67a7e8-4d53-4051-b495-f14ae3034abf","Type":"ContainerDied","Data":"e9632af60fc02ddd7970f2b1be167a2ea9729c40d0e8058949a165c06c1d3389"} Mar 12 09:28:07 crc kubenswrapper[4809]: I0312 09:28:07.837939 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9632af60fc02ddd7970f2b1be167a2ea9729c40d0e8058949a165c06c1d3389" Mar 12 09:28:07 crc kubenswrapper[4809]: I0312 09:28:07.956709 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" Mar 12 09:28:08 crc kubenswrapper[4809]: I0312 09:28:08.055221 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7vzj\" (UniqueName: \"kubernetes.io/projected/8b67a7e8-4d53-4051-b495-f14ae3034abf-kube-api-access-f7vzj\") pod \"8b67a7e8-4d53-4051-b495-f14ae3034abf\" (UID: \"8b67a7e8-4d53-4051-b495-f14ae3034abf\") " Mar 12 09:28:08 crc kubenswrapper[4809]: I0312 09:28:08.075447 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b67a7e8-4d53-4051-b495-f14ae3034abf-kube-api-access-f7vzj" (OuterVolumeSpecName: "kube-api-access-f7vzj") pod "8b67a7e8-4d53-4051-b495-f14ae3034abf" (UID: "8b67a7e8-4d53-4051-b495-f14ae3034abf"). InnerVolumeSpecName "kube-api-access-f7vzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:28:08 crc kubenswrapper[4809]: I0312 09:28:08.166316 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7vzj\" (UniqueName: \"kubernetes.io/projected/8b67a7e8-4d53-4051-b495-f14ae3034abf-kube-api-access-f7vzj\") on node \"crc\" DevicePath \"\"" Mar 12 09:28:08 crc kubenswrapper[4809]: I0312 09:28:08.852174 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555128-wq2tx" Mar 12 09:28:09 crc kubenswrapper[4809]: I0312 09:28:09.053881 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555122-5ms2r"] Mar 12 09:28:09 crc kubenswrapper[4809]: I0312 09:28:09.065718 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555122-5ms2r"] Mar 12 09:28:09 crc kubenswrapper[4809]: I0312 09:28:09.128260 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da86c5d-9fed-4acf-8368-4cfb860c81d6" path="/var/lib/kubelet/pods/6da86c5d-9fed-4acf-8368-4cfb860c81d6/volumes" Mar 12 09:28:50 crc kubenswrapper[4809]: I0312 09:28:50.764830 4809 scope.go:117] "RemoveContainer" containerID="6ab81616e8f5dff5c7ec21dd14d530614d046fab563b224a4039987226f0edce" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.163594 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98"] Mar 12 09:30:00 crc kubenswrapper[4809]: E0312 09:30:00.164797 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b67a7e8-4d53-4051-b495-f14ae3034abf" containerName="oc" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.164813 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b67a7e8-4d53-4051-b495-f14ae3034abf" containerName="oc" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.165043 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b67a7e8-4d53-4051-b495-f14ae3034abf" containerName="oc" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.166466 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.195153 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.195550 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.205293 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555130-trpfq"] Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.219160 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555130-trpfq" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.228566 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.230177 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.260105 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.268085 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98"] Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.297261 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555130-trpfq"] Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.364361 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jr2\" (UniqueName: 
\"kubernetes.io/projected/14eb30bb-5ed3-4e24-9344-a1367d2391b4-kube-api-access-l8jr2\") pod \"auto-csr-approver-29555130-trpfq\" (UID: \"14eb30bb-5ed3-4e24-9344-a1367d2391b4\") " pod="openshift-infra/auto-csr-approver-29555130-trpfq" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.364488 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-secret-volume\") pod \"collect-profiles-29555130-f8n98\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.364534 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8kf\" (UniqueName: \"kubernetes.io/projected/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-kube-api-access-bd8kf\") pod \"collect-profiles-29555130-f8n98\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.364591 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-config-volume\") pod \"collect-profiles-29555130-f8n98\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.467260 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8jr2\" (UniqueName: \"kubernetes.io/projected/14eb30bb-5ed3-4e24-9344-a1367d2391b4-kube-api-access-l8jr2\") pod \"auto-csr-approver-29555130-trpfq\" (UID: \"14eb30bb-5ed3-4e24-9344-a1367d2391b4\") " pod="openshift-infra/auto-csr-approver-29555130-trpfq" Mar 12 09:30:00 
crc kubenswrapper[4809]: I0312 09:30:00.467315 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-secret-volume\") pod \"collect-profiles-29555130-f8n98\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.467361 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd8kf\" (UniqueName: \"kubernetes.io/projected/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-kube-api-access-bd8kf\") pod \"collect-profiles-29555130-f8n98\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.467421 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-config-volume\") pod \"collect-profiles-29555130-f8n98\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.468488 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-config-volume\") pod \"collect-profiles-29555130-f8n98\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.478796 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-secret-volume\") pod \"collect-profiles-29555130-f8n98\" (UID: 
\"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.492501 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd8kf\" (UniqueName: \"kubernetes.io/projected/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-kube-api-access-bd8kf\") pod \"collect-profiles-29555130-f8n98\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.493899 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jr2\" (UniqueName: \"kubernetes.io/projected/14eb30bb-5ed3-4e24-9344-a1367d2391b4-kube-api-access-l8jr2\") pod \"auto-csr-approver-29555130-trpfq\" (UID: \"14eb30bb-5ed3-4e24-9344-a1367d2391b4\") " pod="openshift-infra/auto-csr-approver-29555130-trpfq" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.514379 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:00 crc kubenswrapper[4809]: I0312 09:30:00.549600 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555130-trpfq" Mar 12 09:30:01 crc kubenswrapper[4809]: I0312 09:30:01.140818 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555130-trpfq"] Mar 12 09:30:01 crc kubenswrapper[4809]: I0312 09:30:01.158198 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98"] Mar 12 09:30:02 crc kubenswrapper[4809]: I0312 09:30:02.427710 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555130-trpfq" event={"ID":"14eb30bb-5ed3-4e24-9344-a1367d2391b4","Type":"ContainerStarted","Data":"72371fb7ad1dbb21413f6fbae058b91c09b5544bd1dd94b015114b9c3021103c"} Mar 12 09:30:02 crc kubenswrapper[4809]: I0312 09:30:02.429440 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" event={"ID":"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102","Type":"ContainerStarted","Data":"af37dd1a189c5c5d8d4f50dca93655a11e040a047864f77d24b151d50087805e"} Mar 12 09:30:02 crc kubenswrapper[4809]: I0312 09:30:02.429465 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" event={"ID":"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102","Type":"ContainerStarted","Data":"565d6b3a1c9a24fd70f47f9ec15ee0e116cec759caadf9a3876c0f16ef30fcb8"} Mar 12 09:30:02 crc kubenswrapper[4809]: I0312 09:30:02.479821 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" podStartSLOduration=2.479793077 podStartE2EDuration="2.479793077s" podCreationTimestamp="2026-03-12 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 09:30:02.475051557 +0000 UTC m=+5476.057087290" 
watchObservedRunningTime="2026-03-12 09:30:02.479793077 +0000 UTC m=+5476.061828830" Mar 12 09:30:03 crc kubenswrapper[4809]: I0312 09:30:03.444685 4809 generic.go:334] "Generic (PLEG): container finished" podID="68cbc33b-cacb-4c41-ac0f-d10f0ce8b102" containerID="af37dd1a189c5c5d8d4f50dca93655a11e040a047864f77d24b151d50087805e" exitCode=0 Mar 12 09:30:03 crc kubenswrapper[4809]: I0312 09:30:03.444793 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" event={"ID":"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102","Type":"ContainerDied","Data":"af37dd1a189c5c5d8d4f50dca93655a11e040a047864f77d24b151d50087805e"} Mar 12 09:30:04 crc kubenswrapper[4809]: I0312 09:30:04.458661 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555130-trpfq" event={"ID":"14eb30bb-5ed3-4e24-9344-a1367d2391b4","Type":"ContainerStarted","Data":"0194ff8f79d415a9131afbdbf3657b7a44543c3d2716d5e12d701748e6f69bcb"} Mar 12 09:30:04 crc kubenswrapper[4809]: I0312 09:30:04.489565 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555130-trpfq" podStartSLOduration=2.372233935 podStartE2EDuration="4.489548301s" podCreationTimestamp="2026-03-12 09:30:00 +0000 UTC" firstStartedPulling="2026-03-12 09:30:01.616080118 +0000 UTC m=+5475.198115841" lastFinishedPulling="2026-03-12 09:30:03.733394474 +0000 UTC m=+5477.315430207" observedRunningTime="2026-03-12 09:30:04.488309977 +0000 UTC m=+5478.070345710" watchObservedRunningTime="2026-03-12 09:30:04.489548301 +0000 UTC m=+5478.071584034" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.063734 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.131596 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd8kf\" (UniqueName: \"kubernetes.io/projected/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-kube-api-access-bd8kf\") pod \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.131782 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-config-volume\") pod \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.131825 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-secret-volume\") pod \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\" (UID: \"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102\") " Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.133566 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-config-volume" (OuterVolumeSpecName: "config-volume") pod "68cbc33b-cacb-4c41-ac0f-d10f0ce8b102" (UID: "68cbc33b-cacb-4c41-ac0f-d10f0ce8b102"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.140344 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-kube-api-access-bd8kf" (OuterVolumeSpecName: "kube-api-access-bd8kf") pod "68cbc33b-cacb-4c41-ac0f-d10f0ce8b102" (UID: "68cbc33b-cacb-4c41-ac0f-d10f0ce8b102"). 
InnerVolumeSpecName "kube-api-access-bd8kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.140556 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "68cbc33b-cacb-4c41-ac0f-d10f0ce8b102" (UID: "68cbc33b-cacb-4c41-ac0f-d10f0ce8b102"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.236160 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd8kf\" (UniqueName: \"kubernetes.io/projected/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-kube-api-access-bd8kf\") on node \"crc\" DevicePath \"\"" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.236497 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.236562 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/68cbc33b-cacb-4c41-ac0f-d10f0ce8b102-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.473410 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" event={"ID":"68cbc33b-cacb-4c41-ac0f-d10f0ce8b102","Type":"ContainerDied","Data":"565d6b3a1c9a24fd70f47f9ec15ee0e116cec759caadf9a3876c0f16ef30fcb8"} Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.473458 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="565d6b3a1c9a24fd70f47f9ec15ee0e116cec759caadf9a3876c0f16ef30fcb8" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.473518 4809 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555130-f8n98" Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.478814 4809 generic.go:334] "Generic (PLEG): container finished" podID="14eb30bb-5ed3-4e24-9344-a1367d2391b4" containerID="0194ff8f79d415a9131afbdbf3657b7a44543c3d2716d5e12d701748e6f69bcb" exitCode=0 Mar 12 09:30:05 crc kubenswrapper[4809]: I0312 09:30:05.478867 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555130-trpfq" event={"ID":"14eb30bb-5ed3-4e24-9344-a1367d2391b4","Type":"ContainerDied","Data":"0194ff8f79d415a9131afbdbf3657b7a44543c3d2716d5e12d701748e6f69bcb"} Mar 12 09:30:06 crc kubenswrapper[4809]: I0312 09:30:06.157126 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"] Mar 12 09:30:06 crc kubenswrapper[4809]: I0312 09:30:06.171231 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555085-9xbwl"] Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.044416 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555130-trpfq" Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.128278 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfd98d0-2a15-4bcf-b463-8786260177f4" path="/var/lib/kubelet/pods/7dfd98d0-2a15-4bcf-b463-8786260177f4/volumes" Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.193555 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8jr2\" (UniqueName: \"kubernetes.io/projected/14eb30bb-5ed3-4e24-9344-a1367d2391b4-kube-api-access-l8jr2\") pod \"14eb30bb-5ed3-4e24-9344-a1367d2391b4\" (UID: \"14eb30bb-5ed3-4e24-9344-a1367d2391b4\") " Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.204929 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14eb30bb-5ed3-4e24-9344-a1367d2391b4-kube-api-access-l8jr2" (OuterVolumeSpecName: "kube-api-access-l8jr2") pod "14eb30bb-5ed3-4e24-9344-a1367d2391b4" (UID: "14eb30bb-5ed3-4e24-9344-a1367d2391b4"). InnerVolumeSpecName "kube-api-access-l8jr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.296741 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8jr2\" (UniqueName: \"kubernetes.io/projected/14eb30bb-5ed3-4e24-9344-a1367d2391b4-kube-api-access-l8jr2\") on node \"crc\" DevicePath \"\"" Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.507435 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555130-trpfq" event={"ID":"14eb30bb-5ed3-4e24-9344-a1367d2391b4","Type":"ContainerDied","Data":"72371fb7ad1dbb21413f6fbae058b91c09b5544bd1dd94b015114b9c3021103c"} Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.507483 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72371fb7ad1dbb21413f6fbae058b91c09b5544bd1dd94b015114b9c3021103c" Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.507551 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555130-trpfq" Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.549085 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555124-b9gln"] Mar 12 09:30:07 crc kubenswrapper[4809]: I0312 09:30:07.590818 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555124-b9gln"] Mar 12 09:30:09 crc kubenswrapper[4809]: I0312 09:30:09.122562 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97100618-0145-4d43-b11c-fbaa657b6212" path="/var/lib/kubelet/pods/97100618-0145-4d43-b11c-fbaa657b6212/volumes" Mar 12 09:30:15 crc kubenswrapper[4809]: I0312 09:30:15.049843 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 12 09:30:15 crc kubenswrapper[4809]: I0312 09:30:15.050831 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:30:45 crc kubenswrapper[4809]: I0312 09:30:45.048478 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:30:45 crc kubenswrapper[4809]: I0312 09:30:45.049299 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:30:50 crc kubenswrapper[4809]: I0312 09:30:50.927917 4809 scope.go:117] "RemoveContainer" containerID="4c10fb0b966782076e1cbbb15fec4398e05df5e2719ba119de613f948e894a9e" Mar 12 09:30:50 crc kubenswrapper[4809]: I0312 09:30:50.966265 4809 scope.go:117] "RemoveContainer" containerID="f1fbd606d2eeff0a68d94b7133f01f3e60ac01fb38129ad85456b28870add79c" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.663203 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-htx2b"] Mar 12 09:31:01 crc kubenswrapper[4809]: E0312 09:31:01.664506 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cbc33b-cacb-4c41-ac0f-d10f0ce8b102" containerName="collect-profiles" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.664525 4809 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="68cbc33b-cacb-4c41-ac0f-d10f0ce8b102" containerName="collect-profiles" Mar 12 09:31:01 crc kubenswrapper[4809]: E0312 09:31:01.664541 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14eb30bb-5ed3-4e24-9344-a1367d2391b4" containerName="oc" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.664551 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="14eb30bb-5ed3-4e24-9344-a1367d2391b4" containerName="oc" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.664833 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="68cbc33b-cacb-4c41-ac0f-d10f0ce8b102" containerName="collect-profiles" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.664866 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="14eb30bb-5ed3-4e24-9344-a1367d2391b4" containerName="oc" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.667390 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.683026 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-htx2b"] Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.780989 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-catalog-content\") pod \"redhat-marketplace-htx2b\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.781027 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqrw\" (UniqueName: \"kubernetes.io/projected/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-kube-api-access-kmqrw\") pod \"redhat-marketplace-htx2b\" (UID: 
\"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.781076 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-utilities\") pod \"redhat-marketplace-htx2b\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.884312 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-utilities\") pod \"redhat-marketplace-htx2b\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.884835 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-utilities\") pod \"redhat-marketplace-htx2b\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.885891 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-catalog-content\") pod \"redhat-marketplace-htx2b\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.886158 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-catalog-content\") pod \"redhat-marketplace-htx2b\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " 
pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:01 crc kubenswrapper[4809]: I0312 09:31:01.886201 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmqrw\" (UniqueName: \"kubernetes.io/projected/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-kube-api-access-kmqrw\") pod \"redhat-marketplace-htx2b\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:02 crc kubenswrapper[4809]: I0312 09:31:02.099059 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmqrw\" (UniqueName: \"kubernetes.io/projected/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-kube-api-access-kmqrw\") pod \"redhat-marketplace-htx2b\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:02 crc kubenswrapper[4809]: I0312 09:31:02.289488 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:02 crc kubenswrapper[4809]: I0312 09:31:02.813825 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-htx2b"] Mar 12 09:31:03 crc kubenswrapper[4809]: I0312 09:31:03.257425 4809 generic.go:334] "Generic (PLEG): container finished" podID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerID="0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33" exitCode=0 Mar 12 09:31:03 crc kubenswrapper[4809]: I0312 09:31:03.258052 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htx2b" event={"ID":"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f","Type":"ContainerDied","Data":"0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33"} Mar 12 09:31:03 crc kubenswrapper[4809]: I0312 09:31:03.258164 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htx2b" 
event={"ID":"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f","Type":"ContainerStarted","Data":"1df8a3c8b177a60c845370a5c137bee438fce52cde39322e06870b6537ddea7f"} Mar 12 09:31:03 crc kubenswrapper[4809]: I0312 09:31:03.263751 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 09:31:05 crc kubenswrapper[4809]: I0312 09:31:05.287225 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htx2b" event={"ID":"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f","Type":"ContainerStarted","Data":"da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe"} Mar 12 09:31:06 crc kubenswrapper[4809]: I0312 09:31:06.303992 4809 generic.go:334] "Generic (PLEG): container finished" podID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerID="da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe" exitCode=0 Mar 12 09:31:06 crc kubenswrapper[4809]: I0312 09:31:06.304123 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htx2b" event={"ID":"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f","Type":"ContainerDied","Data":"da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe"} Mar 12 09:31:07 crc kubenswrapper[4809]: I0312 09:31:07.326059 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htx2b" event={"ID":"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f","Type":"ContainerStarted","Data":"8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c"} Mar 12 09:31:07 crc kubenswrapper[4809]: I0312 09:31:07.358634 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-htx2b" podStartSLOduration=2.8351550850000002 podStartE2EDuration="6.358611314s" podCreationTimestamp="2026-03-12 09:31:01 +0000 UTC" firstStartedPulling="2026-03-12 09:31:03.260844557 +0000 UTC m=+5536.842880290" lastFinishedPulling="2026-03-12 09:31:06.784300796 +0000 UTC 
m=+5540.366336519" observedRunningTime="2026-03-12 09:31:07.34490055 +0000 UTC m=+5540.926936293" watchObservedRunningTime="2026-03-12 09:31:07.358611314 +0000 UTC m=+5540.940647047" Mar 12 09:31:12 crc kubenswrapper[4809]: I0312 09:31:12.289817 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:12 crc kubenswrapper[4809]: I0312 09:31:12.292240 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:12 crc kubenswrapper[4809]: I0312 09:31:12.356318 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:12 crc kubenswrapper[4809]: I0312 09:31:12.472259 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:12 crc kubenswrapper[4809]: I0312 09:31:12.594431 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-htx2b"] Mar 12 09:31:14 crc kubenswrapper[4809]: I0312 09:31:14.442925 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-htx2b" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerName="registry-server" containerID="cri-o://8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c" gracePeriod=2 Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.048901 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.049484 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.049546 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.062542 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f804fad92fafa2c516d60507d510027321c4fa2fec6850c9fc8f05bb782945c2"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.062705 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://f804fad92fafa2c516d60507d510027321c4fa2fec6850c9fc8f05bb782945c2" gracePeriod=600 Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.119012 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.195288 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmqrw\" (UniqueName: \"kubernetes.io/projected/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-kube-api-access-kmqrw\") pod \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.195359 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-catalog-content\") pod \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.195534 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-utilities\") pod \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\" (UID: \"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f\") " Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.199617 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-utilities" (OuterVolumeSpecName: "utilities") pod "aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" (UID: "aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.205395 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-kube-api-access-kmqrw" (OuterVolumeSpecName: "kube-api-access-kmqrw") pod "aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" (UID: "aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f"). InnerVolumeSpecName "kube-api-access-kmqrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.232296 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" (UID: "aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.299774 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.299815 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.299844 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmqrw\" (UniqueName: \"kubernetes.io/projected/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f-kube-api-access-kmqrw\") on node \"crc\" DevicePath \"\"" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.458859 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="f804fad92fafa2c516d60507d510027321c4fa2fec6850c9fc8f05bb782945c2" exitCode=0 Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.458930 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"f804fad92fafa2c516d60507d510027321c4fa2fec6850c9fc8f05bb782945c2"} Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.459535 4809 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab"} Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.459568 4809 scope.go:117] "RemoveContainer" containerID="02f01e8c5df87af422c42c4ed2f0dc879d516281446aca0abce3266da364c81e" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.465411 4809 generic.go:334] "Generic (PLEG): container finished" podID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerID="8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c" exitCode=0 Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.465457 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htx2b" event={"ID":"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f","Type":"ContainerDied","Data":"8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c"} Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.465504 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htx2b" event={"ID":"aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f","Type":"ContainerDied","Data":"1df8a3c8b177a60c845370a5c137bee438fce52cde39322e06870b6537ddea7f"} Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.465543 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htx2b" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.500592 4809 scope.go:117] "RemoveContainer" containerID="8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.532693 4809 scope.go:117] "RemoveContainer" containerID="da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.546351 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-htx2b"] Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.567089 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-htx2b"] Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.575961 4809 scope.go:117] "RemoveContainer" containerID="0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.636281 4809 scope.go:117] "RemoveContainer" containerID="8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c" Mar 12 09:31:15 crc kubenswrapper[4809]: E0312 09:31:15.637040 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c\": container with ID starting with 8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c not found: ID does not exist" containerID="8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.637201 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c"} err="failed to get container status \"8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c\": rpc error: code = NotFound desc = could not find container 
\"8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c\": container with ID starting with 8285d4e646b41c48d5444ebc7c475de6d0f1411c00b07641552ff9cf4a0a8a6c not found: ID does not exist" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.637317 4809 scope.go:117] "RemoveContainer" containerID="da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe" Mar 12 09:31:15 crc kubenswrapper[4809]: E0312 09:31:15.638003 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe\": container with ID starting with da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe not found: ID does not exist" containerID="da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.638138 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe"} err="failed to get container status \"da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe\": rpc error: code = NotFound desc = could not find container \"da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe\": container with ID starting with da44a3a6eb1aecdc2d4ae697b9eb78c804991ed863b0fbc37ed1036b27a9e1fe not found: ID does not exist" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.638251 4809 scope.go:117] "RemoveContainer" containerID="0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33" Mar 12 09:31:15 crc kubenswrapper[4809]: E0312 09:31:15.638764 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33\": container with ID starting with 0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33 not found: ID does not exist" 
containerID="0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33" Mar 12 09:31:15 crc kubenswrapper[4809]: I0312 09:31:15.638861 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33"} err="failed to get container status \"0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33\": rpc error: code = NotFound desc = could not find container \"0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33\": container with ID starting with 0daa0c13d90be3828c66cb3ffa3bae57f30192e133288e2d2f6f8da2084c6b33 not found: ID does not exist" Mar 12 09:31:17 crc kubenswrapper[4809]: I0312 09:31:17.132366 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" path="/var/lib/kubelet/pods/aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f/volumes" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.164036 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555132-rprbn"] Mar 12 09:32:00 crc kubenswrapper[4809]: E0312 09:32:00.165542 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerName="extract-utilities" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.165562 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerName="extract-utilities" Mar 12 09:32:00 crc kubenswrapper[4809]: E0312 09:32:00.165587 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerName="registry-server" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.165596 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerName="registry-server" Mar 12 09:32:00 crc kubenswrapper[4809]: E0312 09:32:00.165632 4809 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerName="extract-content" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.165643 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerName="extract-content" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.165938 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5f4cfd-50be-4ca2-a2e4-d1703fa3756f" containerName="registry-server" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.167230 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555132-rprbn" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.169495 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.170248 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.172186 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.188305 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555132-rprbn"] Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.250620 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8npj\" (UniqueName: \"kubernetes.io/projected/0bef043f-d86a-4ee3-96e0-f3c229a7f718-kube-api-access-z8npj\") pod \"auto-csr-approver-29555132-rprbn\" (UID: \"0bef043f-d86a-4ee3-96e0-f3c229a7f718\") " pod="openshift-infra/auto-csr-approver-29555132-rprbn" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.354991 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8npj\" 
(UniqueName: \"kubernetes.io/projected/0bef043f-d86a-4ee3-96e0-f3c229a7f718-kube-api-access-z8npj\") pod \"auto-csr-approver-29555132-rprbn\" (UID: \"0bef043f-d86a-4ee3-96e0-f3c229a7f718\") " pod="openshift-infra/auto-csr-approver-29555132-rprbn" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.380873 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8npj\" (UniqueName: \"kubernetes.io/projected/0bef043f-d86a-4ee3-96e0-f3c229a7f718-kube-api-access-z8npj\") pod \"auto-csr-approver-29555132-rprbn\" (UID: \"0bef043f-d86a-4ee3-96e0-f3c229a7f718\") " pod="openshift-infra/auto-csr-approver-29555132-rprbn" Mar 12 09:32:00 crc kubenswrapper[4809]: I0312 09:32:00.492858 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555132-rprbn" Mar 12 09:32:01 crc kubenswrapper[4809]: I0312 09:32:01.068509 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555132-rprbn"] Mar 12 09:32:01 crc kubenswrapper[4809]: I0312 09:32:01.169273 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555132-rprbn" event={"ID":"0bef043f-d86a-4ee3-96e0-f3c229a7f718","Type":"ContainerStarted","Data":"e89a2caa1ea256dd71ba8390861d13b8bca070afc5acf65f01a55ff3cc33f360"} Mar 12 09:32:03 crc kubenswrapper[4809]: I0312 09:32:03.196028 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555132-rprbn" event={"ID":"0bef043f-d86a-4ee3-96e0-f3c229a7f718","Type":"ContainerStarted","Data":"74c9d1f95e021e9277b4c378c417bbf003fca7bca145d57813f3d81a952f8a44"} Mar 12 09:32:03 crc kubenswrapper[4809]: I0312 09:32:03.223458 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555132-rprbn" podStartSLOduration=2.225357375 podStartE2EDuration="3.223430968s" podCreationTimestamp="2026-03-12 09:32:00 +0000 UTC" 
firstStartedPulling="2026-03-12 09:32:01.078856786 +0000 UTC m=+5594.660892519" lastFinishedPulling="2026-03-12 09:32:02.076930379 +0000 UTC m=+5595.658966112" observedRunningTime="2026-03-12 09:32:03.211207916 +0000 UTC m=+5596.793243649" watchObservedRunningTime="2026-03-12 09:32:03.223430968 +0000 UTC m=+5596.805466701" Mar 12 09:32:04 crc kubenswrapper[4809]: I0312 09:32:04.226025 4809 generic.go:334] "Generic (PLEG): container finished" podID="0bef043f-d86a-4ee3-96e0-f3c229a7f718" containerID="74c9d1f95e021e9277b4c378c417bbf003fca7bca145d57813f3d81a952f8a44" exitCode=0 Mar 12 09:32:04 crc kubenswrapper[4809]: I0312 09:32:04.226085 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555132-rprbn" event={"ID":"0bef043f-d86a-4ee3-96e0-f3c229a7f718","Type":"ContainerDied","Data":"74c9d1f95e021e9277b4c378c417bbf003fca7bca145d57813f3d81a952f8a44"} Mar 12 09:32:05 crc kubenswrapper[4809]: I0312 09:32:05.989673 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555132-rprbn" Mar 12 09:32:06 crc kubenswrapper[4809]: I0312 09:32:06.108315 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8npj\" (UniqueName: \"kubernetes.io/projected/0bef043f-d86a-4ee3-96e0-f3c229a7f718-kube-api-access-z8npj\") pod \"0bef043f-d86a-4ee3-96e0-f3c229a7f718\" (UID: \"0bef043f-d86a-4ee3-96e0-f3c229a7f718\") " Mar 12 09:32:06 crc kubenswrapper[4809]: I0312 09:32:06.125213 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bef043f-d86a-4ee3-96e0-f3c229a7f718-kube-api-access-z8npj" (OuterVolumeSpecName: "kube-api-access-z8npj") pod "0bef043f-d86a-4ee3-96e0-f3c229a7f718" (UID: "0bef043f-d86a-4ee3-96e0-f3c229a7f718"). InnerVolumeSpecName "kube-api-access-z8npj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:32:06 crc kubenswrapper[4809]: I0312 09:32:06.211077 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8npj\" (UniqueName: \"kubernetes.io/projected/0bef043f-d86a-4ee3-96e0-f3c229a7f718-kube-api-access-z8npj\") on node \"crc\" DevicePath \"\"" Mar 12 09:32:06 crc kubenswrapper[4809]: I0312 09:32:06.251789 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555132-rprbn" event={"ID":"0bef043f-d86a-4ee3-96e0-f3c229a7f718","Type":"ContainerDied","Data":"e89a2caa1ea256dd71ba8390861d13b8bca070afc5acf65f01a55ff3cc33f360"} Mar 12 09:32:06 crc kubenswrapper[4809]: I0312 09:32:06.251840 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e89a2caa1ea256dd71ba8390861d13b8bca070afc5acf65f01a55ff3cc33f360" Mar 12 09:32:06 crc kubenswrapper[4809]: I0312 09:32:06.251870 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555132-rprbn" Mar 12 09:32:06 crc kubenswrapper[4809]: I0312 09:32:06.284966 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555126-x7xxx"] Mar 12 09:32:06 crc kubenswrapper[4809]: I0312 09:32:06.295576 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555126-x7xxx"] Mar 12 09:32:07 crc kubenswrapper[4809]: I0312 09:32:07.123784 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9" path="/var/lib/kubelet/pods/ebcd66a4-9afe-4a78-ac11-b8bbc93e57c9/volumes" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.495945 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mb2dx"] Mar 12 09:32:26 crc kubenswrapper[4809]: E0312 09:32:26.497711 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0bef043f-d86a-4ee3-96e0-f3c229a7f718" containerName="oc" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.497733 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bef043f-d86a-4ee3-96e0-f3c229a7f718" containerName="oc" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.498031 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bef043f-d86a-4ee3-96e0-f3c229a7f718" containerName="oc" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.500449 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.510321 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mb2dx"] Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.603104 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-utilities\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.603669 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-catalog-content\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.603696 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jt6l\" (UniqueName: \"kubernetes.io/projected/0dfc2e68-50ec-4110-a8b5-23148f75d31e-kube-api-access-6jt6l\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " 
pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.707348 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-utilities\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.707544 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-catalog-content\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.707573 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jt6l\" (UniqueName: \"kubernetes.io/projected/0dfc2e68-50ec-4110-a8b5-23148f75d31e-kube-api-access-6jt6l\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.707882 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-utilities\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.707978 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-catalog-content\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc 
kubenswrapper[4809]: I0312 09:32:26.731666 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jt6l\" (UniqueName: \"kubernetes.io/projected/0dfc2e68-50ec-4110-a8b5-23148f75d31e-kube-api-access-6jt6l\") pod \"redhat-operators-mb2dx\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:26 crc kubenswrapper[4809]: I0312 09:32:26.842410 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:27 crc kubenswrapper[4809]: I0312 09:32:27.354183 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mb2dx"] Mar 12 09:32:27 crc kubenswrapper[4809]: I0312 09:32:27.503328 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mb2dx" event={"ID":"0dfc2e68-50ec-4110-a8b5-23148f75d31e","Type":"ContainerStarted","Data":"c619dea9ddcfba10af7e517bf5d15aa63cdd0a220e36d3a782df74796e6d53f1"} Mar 12 09:32:28 crc kubenswrapper[4809]: I0312 09:32:28.521477 4809 generic.go:334] "Generic (PLEG): container finished" podID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerID="c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d" exitCode=0 Mar 12 09:32:28 crc kubenswrapper[4809]: I0312 09:32:28.521568 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mb2dx" event={"ID":"0dfc2e68-50ec-4110-a8b5-23148f75d31e","Type":"ContainerDied","Data":"c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d"} Mar 12 09:32:30 crc kubenswrapper[4809]: I0312 09:32:30.548065 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mb2dx" event={"ID":"0dfc2e68-50ec-4110-a8b5-23148f75d31e","Type":"ContainerStarted","Data":"b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358"} Mar 12 09:32:35 crc kubenswrapper[4809]: I0312 
09:32:35.621130 4809 generic.go:334] "Generic (PLEG): container finished" podID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerID="b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358" exitCode=0 Mar 12 09:32:35 crc kubenswrapper[4809]: I0312 09:32:35.621168 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mb2dx" event={"ID":"0dfc2e68-50ec-4110-a8b5-23148f75d31e","Type":"ContainerDied","Data":"b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358"} Mar 12 09:32:36 crc kubenswrapper[4809]: I0312 09:32:36.638536 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mb2dx" event={"ID":"0dfc2e68-50ec-4110-a8b5-23148f75d31e","Type":"ContainerStarted","Data":"e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72"} Mar 12 09:32:36 crc kubenswrapper[4809]: I0312 09:32:36.663109 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mb2dx" podStartSLOduration=3.080520187 podStartE2EDuration="10.663083616s" podCreationTimestamp="2026-03-12 09:32:26 +0000 UTC" firstStartedPulling="2026-03-12 09:32:28.523827229 +0000 UTC m=+5622.105862962" lastFinishedPulling="2026-03-12 09:32:36.106390648 +0000 UTC m=+5629.688426391" observedRunningTime="2026-03-12 09:32:36.656024933 +0000 UTC m=+5630.238060676" watchObservedRunningTime="2026-03-12 09:32:36.663083616 +0000 UTC m=+5630.245119349" Mar 12 09:32:36 crc kubenswrapper[4809]: I0312 09:32:36.843273 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:36 crc kubenswrapper[4809]: I0312 09:32:36.843319 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:32:37 crc kubenswrapper[4809]: I0312 09:32:37.898907 4809 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-mb2dx" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="registry-server" probeResult="failure" output=< Mar 12 09:32:37 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:32:37 crc kubenswrapper[4809]: > Mar 12 09:32:47 crc kubenswrapper[4809]: I0312 09:32:47.910205 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mb2dx" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="registry-server" probeResult="failure" output=< Mar 12 09:32:47 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:32:47 crc kubenswrapper[4809]: > Mar 12 09:32:51 crc kubenswrapper[4809]: I0312 09:32:51.189417 4809 scope.go:117] "RemoveContainer" containerID="34f3d9ca557dc15191aa3d8424185e07320e3772a33b104269650f4242b96c15" Mar 12 09:32:57 crc kubenswrapper[4809]: I0312 09:32:57.899814 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mb2dx" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="registry-server" probeResult="failure" output=< Mar 12 09:32:57 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:32:57 crc kubenswrapper[4809]: > Mar 12 09:33:08 crc kubenswrapper[4809]: I0312 09:33:08.033348 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mb2dx" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="registry-server" probeResult="failure" output=< Mar 12 09:33:08 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:33:08 crc kubenswrapper[4809]: > Mar 12 09:33:15 crc kubenswrapper[4809]: I0312 09:33:15.048536 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:33:15 crc kubenswrapper[4809]: I0312 09:33:15.049269 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:33:16 crc kubenswrapper[4809]: I0312 09:33:16.910304 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:33:16 crc kubenswrapper[4809]: I0312 09:33:16.973832 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:33:17 crc kubenswrapper[4809]: I0312 09:33:17.167059 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mb2dx"] Mar 12 09:33:18 crc kubenswrapper[4809]: I0312 09:33:18.817471 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mb2dx" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="registry-server" containerID="cri-o://e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72" gracePeriod=2 Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.628579 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.806283 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jt6l\" (UniqueName: \"kubernetes.io/projected/0dfc2e68-50ec-4110-a8b5-23148f75d31e-kube-api-access-6jt6l\") pod \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.808253 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-catalog-content\") pod \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.808843 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-utilities\") pod \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\" (UID: \"0dfc2e68-50ec-4110-a8b5-23148f75d31e\") " Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.815320 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-utilities" (OuterVolumeSpecName: "utilities") pod "0dfc2e68-50ec-4110-a8b5-23148f75d31e" (UID: "0dfc2e68-50ec-4110-a8b5-23148f75d31e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.815815 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.823883 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfc2e68-50ec-4110-a8b5-23148f75d31e-kube-api-access-6jt6l" (OuterVolumeSpecName: "kube-api-access-6jt6l") pod "0dfc2e68-50ec-4110-a8b5-23148f75d31e" (UID: "0dfc2e68-50ec-4110-a8b5-23148f75d31e"). InnerVolumeSpecName "kube-api-access-6jt6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.834046 4809 generic.go:334] "Generic (PLEG): container finished" podID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerID="e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72" exitCode=0 Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.834130 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mb2dx" event={"ID":"0dfc2e68-50ec-4110-a8b5-23148f75d31e","Type":"ContainerDied","Data":"e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72"} Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.834168 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mb2dx" event={"ID":"0dfc2e68-50ec-4110-a8b5-23148f75d31e","Type":"ContainerDied","Data":"c619dea9ddcfba10af7e517bf5d15aa63cdd0a220e36d3a782df74796e6d53f1"} Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.834190 4809 scope.go:117] "RemoveContainer" containerID="e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.834410 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mb2dx" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.910783 4809 scope.go:117] "RemoveContainer" containerID="b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.919461 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jt6l\" (UniqueName: \"kubernetes.io/projected/0dfc2e68-50ec-4110-a8b5-23148f75d31e-kube-api-access-6jt6l\") on node \"crc\" DevicePath \"\"" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.957757 4809 scope.go:117] "RemoveContainer" containerID="c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.994008 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dfc2e68-50ec-4110-a8b5-23148f75d31e" (UID: "0dfc2e68-50ec-4110-a8b5-23148f75d31e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.998763 4809 scope.go:117] "RemoveContainer" containerID="e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72" Mar 12 09:33:19 crc kubenswrapper[4809]: E0312 09:33:19.999512 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72\": container with ID starting with e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72 not found: ID does not exist" containerID="e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.999649 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72"} err="failed to get container status \"e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72\": rpc error: code = NotFound desc = could not find container \"e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72\": container with ID starting with e5df06744f461a0ab047699c47175cd2c4aaac1e66ef66c68dd34e61056d2c72 not found: ID does not exist" Mar 12 09:33:19 crc kubenswrapper[4809]: I0312 09:33:19.999674 4809 scope.go:117] "RemoveContainer" containerID="b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358" Mar 12 09:33:20 crc kubenswrapper[4809]: E0312 09:33:20.000012 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358\": container with ID starting with b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358 not found: ID does not exist" containerID="b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358" Mar 12 09:33:20 crc kubenswrapper[4809]: I0312 09:33:20.000032 
4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358"} err="failed to get container status \"b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358\": rpc error: code = NotFound desc = could not find container \"b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358\": container with ID starting with b999cbf95e20caeebabb531ff1e6f08eb7fd64745775b25d810b6a69d5830358 not found: ID does not exist" Mar 12 09:33:20 crc kubenswrapper[4809]: I0312 09:33:20.000045 4809 scope.go:117] "RemoveContainer" containerID="c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d" Mar 12 09:33:20 crc kubenswrapper[4809]: E0312 09:33:20.000436 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d\": container with ID starting with c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d not found: ID does not exist" containerID="c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d" Mar 12 09:33:20 crc kubenswrapper[4809]: I0312 09:33:20.000456 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d"} err="failed to get container status \"c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d\": rpc error: code = NotFound desc = could not find container \"c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d\": container with ID starting with c21c9646c77f9002c65b59c49060381d93fbc1525d6ccbd303d1f56f9300264d not found: ID does not exist" Mar 12 09:33:20 crc kubenswrapper[4809]: I0312 09:33:20.022988 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0dfc2e68-50ec-4110-a8b5-23148f75d31e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:33:20 crc kubenswrapper[4809]: I0312 09:33:20.190781 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mb2dx"] Mar 12 09:33:20 crc kubenswrapper[4809]: I0312 09:33:20.205209 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mb2dx"] Mar 12 09:33:21 crc kubenswrapper[4809]: I0312 09:33:21.124707 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" path="/var/lib/kubelet/pods/0dfc2e68-50ec-4110-a8b5-23148f75d31e/volumes" Mar 12 09:33:45 crc kubenswrapper[4809]: I0312 09:33:45.048281 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:33:45 crc kubenswrapper[4809]: I0312 09:33:45.049635 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.164972 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555134-49r7n"] Mar 12 09:34:00 crc kubenswrapper[4809]: E0312 09:34:00.166301 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="extract-utilities" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.166321 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" 
containerName="extract-utilities" Mar 12 09:34:00 crc kubenswrapper[4809]: E0312 09:34:00.166356 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="registry-server" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.166364 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="registry-server" Mar 12 09:34:00 crc kubenswrapper[4809]: E0312 09:34:00.166377 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="extract-content" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.166383 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="extract-content" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.166656 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfc2e68-50ec-4110-a8b5-23148f75d31e" containerName="registry-server" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.167611 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555134-49r7n" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.173355 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.174144 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.174618 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.180735 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555134-49r7n"] Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.262081 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9ht7\" (UniqueName: \"kubernetes.io/projected/934b370a-37b5-40d3-b543-fdafb84af38d-kube-api-access-s9ht7\") pod \"auto-csr-approver-29555134-49r7n\" (UID: \"934b370a-37b5-40d3-b543-fdafb84af38d\") " pod="openshift-infra/auto-csr-approver-29555134-49r7n" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.364223 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9ht7\" (UniqueName: \"kubernetes.io/projected/934b370a-37b5-40d3-b543-fdafb84af38d-kube-api-access-s9ht7\") pod \"auto-csr-approver-29555134-49r7n\" (UID: \"934b370a-37b5-40d3-b543-fdafb84af38d\") " pod="openshift-infra/auto-csr-approver-29555134-49r7n" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.409932 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9ht7\" (UniqueName: \"kubernetes.io/projected/934b370a-37b5-40d3-b543-fdafb84af38d-kube-api-access-s9ht7\") pod \"auto-csr-approver-29555134-49r7n\" (UID: \"934b370a-37b5-40d3-b543-fdafb84af38d\") " 
pod="openshift-infra/auto-csr-approver-29555134-49r7n" Mar 12 09:34:00 crc kubenswrapper[4809]: I0312 09:34:00.523740 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555134-49r7n" Mar 12 09:34:01 crc kubenswrapper[4809]: I0312 09:34:01.088418 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555134-49r7n"] Mar 12 09:34:01 crc kubenswrapper[4809]: W0312 09:34:01.110523 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod934b370a_37b5_40d3_b543_fdafb84af38d.slice/crio-2e04a43fb0db3ccad30d5c39af86e685eca8304481bbd45ab9f7f90f5ce7cd5a WatchSource:0}: Error finding container 2e04a43fb0db3ccad30d5c39af86e685eca8304481bbd45ab9f7f90f5ce7cd5a: Status 404 returned error can't find the container with id 2e04a43fb0db3ccad30d5c39af86e685eca8304481bbd45ab9f7f90f5ce7cd5a Mar 12 09:34:01 crc kubenswrapper[4809]: I0312 09:34:01.392235 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555134-49r7n" event={"ID":"934b370a-37b5-40d3-b543-fdafb84af38d","Type":"ContainerStarted","Data":"2e04a43fb0db3ccad30d5c39af86e685eca8304481bbd45ab9f7f90f5ce7cd5a"} Mar 12 09:34:03 crc kubenswrapper[4809]: I0312 09:34:03.430655 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555134-49r7n" event={"ID":"934b370a-37b5-40d3-b543-fdafb84af38d","Type":"ContainerStarted","Data":"26bb9616e56be461e072f430a1a13103b942df45086801296883ae0a11a1fd76"} Mar 12 09:34:03 crc kubenswrapper[4809]: I0312 09:34:03.474261 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555134-49r7n" podStartSLOduration=2.146238528 podStartE2EDuration="3.474237527s" podCreationTimestamp="2026-03-12 09:34:00 +0000 UTC" firstStartedPulling="2026-03-12 09:34:01.113904372 +0000 UTC 
m=+5714.695940105" lastFinishedPulling="2026-03-12 09:34:02.441903371 +0000 UTC m=+5716.023939104" observedRunningTime="2026-03-12 09:34:03.464563003 +0000 UTC m=+5717.046598736" watchObservedRunningTime="2026-03-12 09:34:03.474237527 +0000 UTC m=+5717.056273260" Mar 12 09:34:04 crc kubenswrapper[4809]: I0312 09:34:04.443951 4809 generic.go:334] "Generic (PLEG): container finished" podID="934b370a-37b5-40d3-b543-fdafb84af38d" containerID="26bb9616e56be461e072f430a1a13103b942df45086801296883ae0a11a1fd76" exitCode=0 Mar 12 09:34:04 crc kubenswrapper[4809]: I0312 09:34:04.444021 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555134-49r7n" event={"ID":"934b370a-37b5-40d3-b543-fdafb84af38d","Type":"ContainerDied","Data":"26bb9616e56be461e072f430a1a13103b942df45086801296883ae0a11a1fd76"} Mar 12 09:34:06 crc kubenswrapper[4809]: I0312 09:34:06.217367 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555134-49r7n" Mar 12 09:34:06 crc kubenswrapper[4809]: I0312 09:34:06.338844 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9ht7\" (UniqueName: \"kubernetes.io/projected/934b370a-37b5-40d3-b543-fdafb84af38d-kube-api-access-s9ht7\") pod \"934b370a-37b5-40d3-b543-fdafb84af38d\" (UID: \"934b370a-37b5-40d3-b543-fdafb84af38d\") " Mar 12 09:34:06 crc kubenswrapper[4809]: I0312 09:34:06.345524 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934b370a-37b5-40d3-b543-fdafb84af38d-kube-api-access-s9ht7" (OuterVolumeSpecName: "kube-api-access-s9ht7") pod "934b370a-37b5-40d3-b543-fdafb84af38d" (UID: "934b370a-37b5-40d3-b543-fdafb84af38d"). InnerVolumeSpecName "kube-api-access-s9ht7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:34:06 crc kubenswrapper[4809]: I0312 09:34:06.441807 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9ht7\" (UniqueName: \"kubernetes.io/projected/934b370a-37b5-40d3-b543-fdafb84af38d-kube-api-access-s9ht7\") on node \"crc\" DevicePath \"\"" Mar 12 09:34:06 crc kubenswrapper[4809]: I0312 09:34:06.467087 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555134-49r7n" event={"ID":"934b370a-37b5-40d3-b543-fdafb84af38d","Type":"ContainerDied","Data":"2e04a43fb0db3ccad30d5c39af86e685eca8304481bbd45ab9f7f90f5ce7cd5a"} Mar 12 09:34:06 crc kubenswrapper[4809]: I0312 09:34:06.467157 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e04a43fb0db3ccad30d5c39af86e685eca8304481bbd45ab9f7f90f5ce7cd5a" Mar 12 09:34:06 crc kubenswrapper[4809]: I0312 09:34:06.467225 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555134-49r7n" Mar 12 09:34:07 crc kubenswrapper[4809]: I0312 09:34:07.299673 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555128-wq2tx"] Mar 12 09:34:07 crc kubenswrapper[4809]: I0312 09:34:07.312731 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555128-wq2tx"] Mar 12 09:34:09 crc kubenswrapper[4809]: I0312 09:34:09.120508 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b67a7e8-4d53-4051-b495-f14ae3034abf" path="/var/lib/kubelet/pods/8b67a7e8-4d53-4051-b495-f14ae3034abf/volumes" Mar 12 09:34:15 crc kubenswrapper[4809]: I0312 09:34:15.048525 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 12 09:34:15 crc kubenswrapper[4809]: I0312 09:34:15.049336 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:34:15 crc kubenswrapper[4809]: I0312 09:34:15.049398 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 09:34:15 crc kubenswrapper[4809]: I0312 09:34:15.050683 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 09:34:15 crc kubenswrapper[4809]: I0312 09:34:15.050762 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" gracePeriod=600 Mar 12 09:34:15 crc kubenswrapper[4809]: E0312 09:34:15.183063 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:34:15 crc kubenswrapper[4809]: 
I0312 09:34:15.586442 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" exitCode=0 Mar 12 09:34:15 crc kubenswrapper[4809]: I0312 09:34:15.586530 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab"} Mar 12 09:34:15 crc kubenswrapper[4809]: I0312 09:34:15.586860 4809 scope.go:117] "RemoveContainer" containerID="f804fad92fafa2c516d60507d510027321c4fa2fec6850c9fc8f05bb782945c2" Mar 12 09:34:15 crc kubenswrapper[4809]: I0312 09:34:15.588918 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:34:15 crc kubenswrapper[4809]: E0312 09:34:15.589452 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:34:29 crc kubenswrapper[4809]: I0312 09:34:29.106414 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:34:29 crc kubenswrapper[4809]: E0312 09:34:29.107247 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:34:40 crc kubenswrapper[4809]: I0312 09:34:40.106614 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:34:40 crc kubenswrapper[4809]: E0312 09:34:40.107520 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:34:51 crc kubenswrapper[4809]: I0312 09:34:51.957577 4809 scope.go:117] "RemoveContainer" containerID="992d6e02bbf625847e5feae5c029d7ef55c47651a7beb9d362455f4b8d7c2875" Mar 12 09:34:52 crc kubenswrapper[4809]: I0312 09:34:52.108889 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:34:52 crc kubenswrapper[4809]: E0312 09:34:52.109416 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:35:05 crc kubenswrapper[4809]: I0312 09:35:05.106548 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:35:05 crc kubenswrapper[4809]: E0312 09:35:05.107468 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:35:20 crc kubenswrapper[4809]: I0312 09:35:20.106461 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:35:20 crc kubenswrapper[4809]: E0312 09:35:20.107549 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:35:35 crc kubenswrapper[4809]: I0312 09:35:35.106495 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:35:35 crc kubenswrapper[4809]: E0312 09:35:35.107497 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:35:48 crc kubenswrapper[4809]: I0312 09:35:48.106578 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:35:48 crc kubenswrapper[4809]: E0312 09:35:48.108877 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:35:59 crc kubenswrapper[4809]: I0312 09:35:59.106855 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:35:59 crc kubenswrapper[4809]: E0312 09:35:59.107740 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.168048 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555136-46k99"] Mar 12 09:36:00 crc kubenswrapper[4809]: E0312 09:36:00.169449 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934b370a-37b5-40d3-b543-fdafb84af38d" containerName="oc" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.169476 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="934b370a-37b5-40d3-b543-fdafb84af38d" containerName="oc" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.169920 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="934b370a-37b5-40d3-b543-fdafb84af38d" containerName="oc" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.171728 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555136-46k99" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.175515 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.175664 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.181247 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.183535 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555136-46k99"] Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.314260 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24tfd\" (UniqueName: \"kubernetes.io/projected/0c8b01b9-e591-47f5-836f-13323402edef-kube-api-access-24tfd\") pod \"auto-csr-approver-29555136-46k99\" (UID: \"0c8b01b9-e591-47f5-836f-13323402edef\") " pod="openshift-infra/auto-csr-approver-29555136-46k99" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.418233 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24tfd\" (UniqueName: \"kubernetes.io/projected/0c8b01b9-e591-47f5-836f-13323402edef-kube-api-access-24tfd\") pod \"auto-csr-approver-29555136-46k99\" (UID: \"0c8b01b9-e591-47f5-836f-13323402edef\") " pod="openshift-infra/auto-csr-approver-29555136-46k99" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.441556 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24tfd\" (UniqueName: \"kubernetes.io/projected/0c8b01b9-e591-47f5-836f-13323402edef-kube-api-access-24tfd\") pod \"auto-csr-approver-29555136-46k99\" (UID: \"0c8b01b9-e591-47f5-836f-13323402edef\") " 
pod="openshift-infra/auto-csr-approver-29555136-46k99" Mar 12 09:36:00 crc kubenswrapper[4809]: I0312 09:36:00.503855 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555136-46k99" Mar 12 09:36:01 crc kubenswrapper[4809]: I0312 09:36:01.153170 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555136-46k99"] Mar 12 09:36:01 crc kubenswrapper[4809]: I0312 09:36:01.972741 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555136-46k99" event={"ID":"0c8b01b9-e591-47f5-836f-13323402edef","Type":"ContainerStarted","Data":"93ad8732594af34064eb11af1a81fe826cef90de3bfae15c38e5c01ba14e4fba"} Mar 12 09:36:03 crc kubenswrapper[4809]: I0312 09:36:03.007212 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555136-46k99" event={"ID":"0c8b01b9-e591-47f5-836f-13323402edef","Type":"ContainerStarted","Data":"cb4f6a25fd1cd3f3e5ec24e36ba0a6a6fb785921ac26955c3aa4350b4f2bdb38"} Mar 12 09:36:03 crc kubenswrapper[4809]: I0312 09:36:03.041433 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555136-46k99" podStartSLOduration=1.871854321 podStartE2EDuration="3.041390578s" podCreationTimestamp="2026-03-12 09:36:00 +0000 UTC" firstStartedPulling="2026-03-12 09:36:01.197977368 +0000 UTC m=+5834.780013111" lastFinishedPulling="2026-03-12 09:36:02.367513635 +0000 UTC m=+5835.949549368" observedRunningTime="2026-03-12 09:36:03.027823089 +0000 UTC m=+5836.609858842" watchObservedRunningTime="2026-03-12 09:36:03.041390578 +0000 UTC m=+5836.623426311" Mar 12 09:36:04 crc kubenswrapper[4809]: I0312 09:36:04.032486 4809 generic.go:334] "Generic (PLEG): container finished" podID="0c8b01b9-e591-47f5-836f-13323402edef" containerID="cb4f6a25fd1cd3f3e5ec24e36ba0a6a6fb785921ac26955c3aa4350b4f2bdb38" exitCode=0 Mar 12 09:36:04 crc 
kubenswrapper[4809]: I0312 09:36:04.032838 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555136-46k99" event={"ID":"0c8b01b9-e591-47f5-836f-13323402edef","Type":"ContainerDied","Data":"cb4f6a25fd1cd3f3e5ec24e36ba0a6a6fb785921ac26955c3aa4350b4f2bdb38"} Mar 12 09:36:05 crc kubenswrapper[4809]: I0312 09:36:05.617915 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555136-46k99" Mar 12 09:36:05 crc kubenswrapper[4809]: I0312 09:36:05.734237 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24tfd\" (UniqueName: \"kubernetes.io/projected/0c8b01b9-e591-47f5-836f-13323402edef-kube-api-access-24tfd\") pod \"0c8b01b9-e591-47f5-836f-13323402edef\" (UID: \"0c8b01b9-e591-47f5-836f-13323402edef\") " Mar 12 09:36:05 crc kubenswrapper[4809]: I0312 09:36:05.759986 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8b01b9-e591-47f5-836f-13323402edef-kube-api-access-24tfd" (OuterVolumeSpecName: "kube-api-access-24tfd") pod "0c8b01b9-e591-47f5-836f-13323402edef" (UID: "0c8b01b9-e591-47f5-836f-13323402edef"). InnerVolumeSpecName "kube-api-access-24tfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:36:05 crc kubenswrapper[4809]: I0312 09:36:05.840151 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24tfd\" (UniqueName: \"kubernetes.io/projected/0c8b01b9-e591-47f5-836f-13323402edef-kube-api-access-24tfd\") on node \"crc\" DevicePath \"\"" Mar 12 09:36:06 crc kubenswrapper[4809]: I0312 09:36:06.070904 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555136-46k99" event={"ID":"0c8b01b9-e591-47f5-836f-13323402edef","Type":"ContainerDied","Data":"93ad8732594af34064eb11af1a81fe826cef90de3bfae15c38e5c01ba14e4fba"} Mar 12 09:36:06 crc kubenswrapper[4809]: I0312 09:36:06.070946 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93ad8732594af34064eb11af1a81fe826cef90de3bfae15c38e5c01ba14e4fba" Mar 12 09:36:06 crc kubenswrapper[4809]: I0312 09:36:06.070979 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555136-46k99" Mar 12 09:36:06 crc kubenswrapper[4809]: I0312 09:36:06.116326 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555130-trpfq"] Mar 12 09:36:06 crc kubenswrapper[4809]: I0312 09:36:06.129641 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555130-trpfq"] Mar 12 09:36:07 crc kubenswrapper[4809]: I0312 09:36:07.130463 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14eb30bb-5ed3-4e24-9344-a1367d2391b4" path="/var/lib/kubelet/pods/14eb30bb-5ed3-4e24-9344-a1367d2391b4/volumes" Mar 12 09:36:10 crc kubenswrapper[4809]: I0312 09:36:10.106502 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:36:10 crc kubenswrapper[4809]: E0312 09:36:10.107528 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:36:23 crc kubenswrapper[4809]: I0312 09:36:23.107098 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:36:23 crc kubenswrapper[4809]: E0312 09:36:23.107991 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:36:35 crc kubenswrapper[4809]: I0312 09:36:35.105519 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:36:35 crc kubenswrapper[4809]: E0312 09:36:35.106216 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:36:50 crc kubenswrapper[4809]: I0312 09:36:50.107173 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:36:50 crc kubenswrapper[4809]: E0312 09:36:50.108170 4809 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:36:52 crc kubenswrapper[4809]: I0312 09:36:52.117919 4809 scope.go:117] "RemoveContainer" containerID="0194ff8f79d415a9131afbdbf3657b7a44543c3d2716d5e12d701748e6f69bcb" Mar 12 09:37:03 crc kubenswrapper[4809]: I0312 09:37:03.106887 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:37:03 crc kubenswrapper[4809]: E0312 09:37:03.107995 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.052075 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vxj8s"] Mar 12 09:37:17 crc kubenswrapper[4809]: E0312 09:37:17.054486 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8b01b9-e591-47f5-836f-13323402edef" containerName="oc" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.054510 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8b01b9-e591-47f5-836f-13323402edef" containerName="oc" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.054860 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8b01b9-e591-47f5-836f-13323402edef" containerName="oc" Mar 12 09:37:17 crc 
kubenswrapper[4809]: I0312 09:37:17.057754 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.081621 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxj8s"] Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.093839 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-catalog-content\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.094243 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvlf\" (UniqueName: \"kubernetes.io/projected/3e459853-fab9-4fcf-a0bb-7451f379e0f2-kube-api-access-8vvlf\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.094380 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-utilities\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.119201 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:37:17 crc kubenswrapper[4809]: E0312 09:37:17.119688 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.198290 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-utilities\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.198532 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-catalog-content\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.198686 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvlf\" (UniqueName: \"kubernetes.io/projected/3e459853-fab9-4fcf-a0bb-7451f379e0f2-kube-api-access-8vvlf\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.199852 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-catalog-content\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.201626 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-utilities\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.223202 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvlf\" (UniqueName: \"kubernetes.io/projected/3e459853-fab9-4fcf-a0bb-7451f379e0f2-kube-api-access-8vvlf\") pod \"certified-operators-vxj8s\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.407530 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:17 crc kubenswrapper[4809]: I0312 09:37:17.860049 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxj8s"] Mar 12 09:37:18 crc kubenswrapper[4809]: I0312 09:37:18.093373 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxj8s" event={"ID":"3e459853-fab9-4fcf-a0bb-7451f379e0f2","Type":"ContainerStarted","Data":"d00a242f0add1ef74d6d4e51d26890cb09bda136eec89282aa17702788b0b748"} Mar 12 09:37:19 crc kubenswrapper[4809]: I0312 09:37:19.111295 4809 generic.go:334] "Generic (PLEG): container finished" podID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerID="e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d" exitCode=0 Mar 12 09:37:19 crc kubenswrapper[4809]: I0312 09:37:19.115559 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 09:37:19 crc kubenswrapper[4809]: I0312 09:37:19.128033 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxj8s" 
event={"ID":"3e459853-fab9-4fcf-a0bb-7451f379e0f2","Type":"ContainerDied","Data":"e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d"} Mar 12 09:37:21 crc kubenswrapper[4809]: I0312 09:37:21.144043 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxj8s" event={"ID":"3e459853-fab9-4fcf-a0bb-7451f379e0f2","Type":"ContainerStarted","Data":"efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c"} Mar 12 09:37:23 crc kubenswrapper[4809]: I0312 09:37:23.168026 4809 generic.go:334] "Generic (PLEG): container finished" podID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerID="efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c" exitCode=0 Mar 12 09:37:23 crc kubenswrapper[4809]: I0312 09:37:23.168069 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxj8s" event={"ID":"3e459853-fab9-4fcf-a0bb-7451f379e0f2","Type":"ContainerDied","Data":"efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c"} Mar 12 09:37:25 crc kubenswrapper[4809]: I0312 09:37:25.202679 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxj8s" event={"ID":"3e459853-fab9-4fcf-a0bb-7451f379e0f2","Type":"ContainerStarted","Data":"94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e"} Mar 12 09:37:25 crc kubenswrapper[4809]: I0312 09:37:25.234833 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vxj8s" podStartSLOduration=3.428782709 podStartE2EDuration="8.234808786s" podCreationTimestamp="2026-03-12 09:37:17 +0000 UTC" firstStartedPulling="2026-03-12 09:37:19.114893035 +0000 UTC m=+5912.696928808" lastFinishedPulling="2026-03-12 09:37:23.920919152 +0000 UTC m=+5917.502954885" observedRunningTime="2026-03-12 09:37:25.232952595 +0000 UTC m=+5918.814988358" watchObservedRunningTime="2026-03-12 09:37:25.234808786 +0000 UTC 
m=+5918.816844529" Mar 12 09:37:27 crc kubenswrapper[4809]: I0312 09:37:27.407592 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:27 crc kubenswrapper[4809]: I0312 09:37:27.407877 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:28 crc kubenswrapper[4809]: I0312 09:37:28.106821 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:37:28 crc kubenswrapper[4809]: E0312 09:37:28.107707 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:37:28 crc kubenswrapper[4809]: I0312 09:37:28.463698 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vxj8s" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="registry-server" probeResult="failure" output=< Mar 12 09:37:28 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:37:28 crc kubenswrapper[4809]: > Mar 12 09:37:37 crc kubenswrapper[4809]: I0312 09:37:37.464600 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:37 crc kubenswrapper[4809]: I0312 09:37:37.523799 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:37 crc kubenswrapper[4809]: I0312 09:37:37.703793 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-vxj8s"] Mar 12 09:37:39 crc kubenswrapper[4809]: I0312 09:37:39.437954 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vxj8s" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="registry-server" containerID="cri-o://94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e" gracePeriod=2 Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.054852 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.106300 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:37:40 crc kubenswrapper[4809]: E0312 09:37:40.106808 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.156597 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vvlf\" (UniqueName: \"kubernetes.io/projected/3e459853-fab9-4fcf-a0bb-7451f379e0f2-kube-api-access-8vvlf\") pod \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.156741 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-catalog-content\") pod \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\" (UID: 
\"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.156982 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-utilities\") pod \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\" (UID: \"3e459853-fab9-4fcf-a0bb-7451f379e0f2\") " Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.160832 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-utilities" (OuterVolumeSpecName: "utilities") pod "3e459853-fab9-4fcf-a0bb-7451f379e0f2" (UID: "3e459853-fab9-4fcf-a0bb-7451f379e0f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.165745 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e459853-fab9-4fcf-a0bb-7451f379e0f2-kube-api-access-8vvlf" (OuterVolumeSpecName: "kube-api-access-8vvlf") pod "3e459853-fab9-4fcf-a0bb-7451f379e0f2" (UID: "3e459853-fab9-4fcf-a0bb-7451f379e0f2"). InnerVolumeSpecName "kube-api-access-8vvlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.233754 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e459853-fab9-4fcf-a0bb-7451f379e0f2" (UID: "3e459853-fab9-4fcf-a0bb-7451f379e0f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.263935 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.263975 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vvlf\" (UniqueName: \"kubernetes.io/projected/3e459853-fab9-4fcf-a0bb-7451f379e0f2-kube-api-access-8vvlf\") on node \"crc\" DevicePath \"\"" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.263992 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e459853-fab9-4fcf-a0bb-7451f379e0f2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.451261 4809 generic.go:334] "Generic (PLEG): container finished" podID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerID="94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e" exitCode=0 Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.451314 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxj8s" event={"ID":"3e459853-fab9-4fcf-a0bb-7451f379e0f2","Type":"ContainerDied","Data":"94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e"} Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.451349 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxj8s" event={"ID":"3e459853-fab9-4fcf-a0bb-7451f379e0f2","Type":"ContainerDied","Data":"d00a242f0add1ef74d6d4e51d26890cb09bda136eec89282aa17702788b0b748"} Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.451371 4809 scope.go:117] "RemoveContainer" containerID="94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 
09:37:40.451500 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxj8s" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.503417 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxj8s"] Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.507887 4809 scope.go:117] "RemoveContainer" containerID="efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.534453 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vxj8s"] Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.559193 4809 scope.go:117] "RemoveContainer" containerID="e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.619232 4809 scope.go:117] "RemoveContainer" containerID="94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e" Mar 12 09:37:40 crc kubenswrapper[4809]: E0312 09:37:40.619871 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e\": container with ID starting with 94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e not found: ID does not exist" containerID="94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.619929 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e"} err="failed to get container status \"94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e\": rpc error: code = NotFound desc = could not find container \"94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e\": container with ID starting with 
94d44897caf2fde72cf64e5a2b44813415f86e30735abf0f9e79bec27a1d053e not found: ID does not exist" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.619966 4809 scope.go:117] "RemoveContainer" containerID="efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c" Mar 12 09:37:40 crc kubenswrapper[4809]: E0312 09:37:40.620400 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c\": container with ID starting with efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c not found: ID does not exist" containerID="efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.620447 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c"} err="failed to get container status \"efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c\": rpc error: code = NotFound desc = could not find container \"efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c\": container with ID starting with efed9303f04a4917933c0791d79ec61b7bb08d38deec14b054511d7a3915d85c not found: ID does not exist" Mar 12 09:37:40 crc kubenswrapper[4809]: I0312 09:37:40.620484 4809 scope.go:117] "RemoveContainer" containerID="e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d" Mar 12 09:37:40 crc kubenswrapper[4809]: E0312 09:37:40.620945 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d\": container with ID starting with e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d not found: ID does not exist" containerID="e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d" Mar 12 09:37:40 crc 
kubenswrapper[4809]: I0312 09:37:40.620983 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d"} err="failed to get container status \"e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d\": rpc error: code = NotFound desc = could not find container \"e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d\": container with ID starting with e3ed86c985037b5476e334376d4435298796dddec26775a1cd6b289dd1bdc40d not found: ID does not exist" Mar 12 09:37:41 crc kubenswrapper[4809]: I0312 09:37:41.122413 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" path="/var/lib/kubelet/pods/3e459853-fab9-4fcf-a0bb-7451f379e0f2/volumes" Mar 12 09:37:53 crc kubenswrapper[4809]: I0312 09:37:53.105826 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:37:53 crc kubenswrapper[4809]: E0312 09:37:53.106805 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.161919 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555138-7lx2q"] Mar 12 09:38:00 crc kubenswrapper[4809]: E0312 09:38:00.163328 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="extract-content" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.163349 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="extract-content" Mar 12 09:38:00 crc kubenswrapper[4809]: E0312 09:38:00.163368 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="extract-utilities" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.163376 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="extract-utilities" Mar 12 09:38:00 crc kubenswrapper[4809]: E0312 09:38:00.163427 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="registry-server" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.163435 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="registry-server" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.163753 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e459853-fab9-4fcf-a0bb-7451f379e0f2" containerName="registry-server" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.164934 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.167216 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.167846 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.173729 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.183321 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555138-7lx2q"] Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.308232 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfbc\" (UniqueName: \"kubernetes.io/projected/70576c65-c34c-4c1b-9c96-2623814c7eb9-kube-api-access-lwfbc\") pod \"auto-csr-approver-29555138-7lx2q\" (UID: \"70576c65-c34c-4c1b-9c96-2623814c7eb9\") " pod="openshift-infra/auto-csr-approver-29555138-7lx2q" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.411264 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfbc\" (UniqueName: \"kubernetes.io/projected/70576c65-c34c-4c1b-9c96-2623814c7eb9-kube-api-access-lwfbc\") pod \"auto-csr-approver-29555138-7lx2q\" (UID: \"70576c65-c34c-4c1b-9c96-2623814c7eb9\") " pod="openshift-infra/auto-csr-approver-29555138-7lx2q" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.437849 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfbc\" (UniqueName: \"kubernetes.io/projected/70576c65-c34c-4c1b-9c96-2623814c7eb9-kube-api-access-lwfbc\") pod \"auto-csr-approver-29555138-7lx2q\" (UID: \"70576c65-c34c-4c1b-9c96-2623814c7eb9\") " 
pod="openshift-infra/auto-csr-approver-29555138-7lx2q" Mar 12 09:38:00 crc kubenswrapper[4809]: I0312 09:38:00.499288 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" Mar 12 09:38:01 crc kubenswrapper[4809]: I0312 09:38:01.031769 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555138-7lx2q"] Mar 12 09:38:01 crc kubenswrapper[4809]: I0312 09:38:01.752942 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" event={"ID":"70576c65-c34c-4c1b-9c96-2623814c7eb9","Type":"ContainerStarted","Data":"a0b1f44451a117f160328146ca8d6d98567c75c6c7b717b95792f43d20a4a3e1"} Mar 12 09:38:03 crc kubenswrapper[4809]: I0312 09:38:03.776395 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" event={"ID":"70576c65-c34c-4c1b-9c96-2623814c7eb9","Type":"ContainerStarted","Data":"6b5d3c3b166a8e9e50603d0a94d1eb56a10390968c5fb9c70abb3e5f5d1d4e3d"} Mar 12 09:38:03 crc kubenswrapper[4809]: I0312 09:38:03.806818 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" podStartSLOduration=2.490099358 podStartE2EDuration="3.806798728s" podCreationTimestamp="2026-03-12 09:38:00 +0000 UTC" firstStartedPulling="2026-03-12 09:38:01.036071344 +0000 UTC m=+5954.618107067" lastFinishedPulling="2026-03-12 09:38:02.352770704 +0000 UTC m=+5955.934806437" observedRunningTime="2026-03-12 09:38:03.800683302 +0000 UTC m=+5957.382719035" watchObservedRunningTime="2026-03-12 09:38:03.806798728 +0000 UTC m=+5957.388834461" Mar 12 09:38:05 crc kubenswrapper[4809]: I0312 09:38:05.801426 4809 generic.go:334] "Generic (PLEG): container finished" podID="70576c65-c34c-4c1b-9c96-2623814c7eb9" containerID="6b5d3c3b166a8e9e50603d0a94d1eb56a10390968c5fb9c70abb3e5f5d1d4e3d" exitCode=0 Mar 12 09:38:05 crc 
kubenswrapper[4809]: I0312 09:38:05.801517 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" event={"ID":"70576c65-c34c-4c1b-9c96-2623814c7eb9","Type":"ContainerDied","Data":"6b5d3c3b166a8e9e50603d0a94d1eb56a10390968c5fb9c70abb3e5f5d1d4e3d"} Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.315026 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.472844 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfbc\" (UniqueName: \"kubernetes.io/projected/70576c65-c34c-4c1b-9c96-2623814c7eb9-kube-api-access-lwfbc\") pod \"70576c65-c34c-4c1b-9c96-2623814c7eb9\" (UID: \"70576c65-c34c-4c1b-9c96-2623814c7eb9\") " Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.480057 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70576c65-c34c-4c1b-9c96-2623814c7eb9-kube-api-access-lwfbc" (OuterVolumeSpecName: "kube-api-access-lwfbc") pod "70576c65-c34c-4c1b-9c96-2623814c7eb9" (UID: "70576c65-c34c-4c1b-9c96-2623814c7eb9"). InnerVolumeSpecName "kube-api-access-lwfbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.576951 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwfbc\" (UniqueName: \"kubernetes.io/projected/70576c65-c34c-4c1b-9c96-2623814c7eb9-kube-api-access-lwfbc\") on node \"crc\" DevicePath \"\"" Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.829528 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" event={"ID":"70576c65-c34c-4c1b-9c96-2623814c7eb9","Type":"ContainerDied","Data":"a0b1f44451a117f160328146ca8d6d98567c75c6c7b717b95792f43d20a4a3e1"} Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.829575 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0b1f44451a117f160328146ca8d6d98567c75c6c7b717b95792f43d20a4a3e1" Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.829605 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555138-7lx2q" Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.912742 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555132-rprbn"] Mar 12 09:38:07 crc kubenswrapper[4809]: I0312 09:38:07.924736 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555132-rprbn"] Mar 12 09:38:08 crc kubenswrapper[4809]: I0312 09:38:08.105953 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:38:08 crc kubenswrapper[4809]: E0312 09:38:08.106416 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:38:09 crc kubenswrapper[4809]: I0312 09:38:09.179657 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bef043f-d86a-4ee3-96e0-f3c229a7f718" path="/var/lib/kubelet/pods/0bef043f-d86a-4ee3-96e0-f3c229a7f718/volumes" Mar 12 09:38:19 crc kubenswrapper[4809]: I0312 09:38:19.107787 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:38:19 crc kubenswrapper[4809]: E0312 09:38:19.108731 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:38:33 crc kubenswrapper[4809]: I0312 09:38:33.106933 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:38:33 crc kubenswrapper[4809]: E0312 09:38:33.108030 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:38:47 crc kubenswrapper[4809]: I0312 09:38:47.124423 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:38:47 crc kubenswrapper[4809]: E0312 09:38:47.128580 4809 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:38:52 crc kubenswrapper[4809]: I0312 09:38:52.281064 4809 scope.go:117] "RemoveContainer" containerID="74c9d1f95e021e9277b4c378c417bbf003fca7bca145d57813f3d81a952f8a44" Mar 12 09:38:59 crc kubenswrapper[4809]: I0312 09:38:59.107234 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:38:59 crc kubenswrapper[4809]: E0312 09:38:59.108267 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:39:13 crc kubenswrapper[4809]: I0312 09:39:13.107999 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:39:13 crc kubenswrapper[4809]: E0312 09:39:13.109092 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:39:24 crc kubenswrapper[4809]: I0312 09:39:24.106736 4809 scope.go:117] 
"RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:39:24 crc kubenswrapper[4809]: I0312 09:39:24.920436 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"2e6ab1f1fc625e84ede9f32b1a2be463f7b37e1535ff9f1acd9bbcf4150c9d1a"} Mar 12 09:39:27 crc kubenswrapper[4809]: I0312 09:39:27.953525 4809 generic.go:334] "Generic (PLEG): container finished" podID="d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" containerID="397e2f02561b964b375eb590587fb3968f9e132cfa4c93400edbc56886647694" exitCode=1 Mar 12 09:39:27 crc kubenswrapper[4809]: I0312 09:39:27.953608 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d","Type":"ContainerDied","Data":"397e2f02561b964b375eb590587fb3968f9e132cfa4c93400edbc56886647694"} Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.447695 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.571750 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-temporary\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.571923 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-workdir\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.572034 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config-secret\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.572163 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfzf8\" (UniqueName: \"kubernetes.io/projected/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-kube-api-access-bfzf8\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.572229 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 
09:39:29.572291 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-config-data\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.572406 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ca-certs\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.572434 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.572939 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ssh-key\") pod \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\" (UID: \"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d\") " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.573203 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.573818 4809 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.581406 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.584620 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-config-data" (OuterVolumeSpecName: "config-data") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.585694 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.589470 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-kube-api-access-bfzf8" (OuterVolumeSpecName: "kube-api-access-bfzf8") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "kube-api-access-bfzf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.612014 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.635579 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.637621 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.682282 4809 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.682316 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.682332 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfzf8\" (UniqueName: \"kubernetes.io/projected/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-kube-api-access-bfzf8\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.682342 4809 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.682351 4809 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.682992 4809 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.683011 4809 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 
09:39:29.683417 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" (UID: "d4b8d2be-fafc-4d7e-9348-053c53d3cb4d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.725986 4809 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.785391 4809 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d4b8d2be-fafc-4d7e-9348-053c53d3cb4d-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.785427 4809 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.988863 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d4b8d2be-fafc-4d7e-9348-053c53d3cb4d","Type":"ContainerDied","Data":"fa4f835a8bf6c604809bcdd04d9466f13532f158c03007c0012cd7bc489ef801"} Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.989354 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa4f835a8bf6c604809bcdd04d9466f13532f158c03007c0012cd7bc489ef801" Mar 12 09:39:29 crc kubenswrapper[4809]: I0312 09:39:29.989633 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.441686 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 09:39:34 crc kubenswrapper[4809]: E0312 09:39:34.443378 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70576c65-c34c-4c1b-9c96-2623814c7eb9" containerName="oc" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.443400 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="70576c65-c34c-4c1b-9c96-2623814c7eb9" containerName="oc" Mar 12 09:39:34 crc kubenswrapper[4809]: E0312 09:39:34.443432 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" containerName="tempest-tests-tempest-tests-runner" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.443440 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" containerName="tempest-tests-tempest-tests-runner" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.443760 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b8d2be-fafc-4d7e-9348-053c53d3cb4d" containerName="tempest-tests-tempest-tests-runner" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.443787 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="70576c65-c34c-4c1b-9c96-2623814c7eb9" containerName="oc" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.444981 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.448401 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rzjwh" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.475253 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.507157 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpvl\" (UniqueName: \"kubernetes.io/projected/f431a085-378f-4a04-95ba-806aeee1f1dc-kube-api-access-mtpvl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f431a085-378f-4a04-95ba-806aeee1f1dc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.507226 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f431a085-378f-4a04-95ba-806aeee1f1dc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.609781 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtpvl\" (UniqueName: \"kubernetes.io/projected/f431a085-378f-4a04-95ba-806aeee1f1dc-kube-api-access-mtpvl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f431a085-378f-4a04-95ba-806aeee1f1dc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.609887 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f431a085-378f-4a04-95ba-806aeee1f1dc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.611666 4809 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f431a085-378f-4a04-95ba-806aeee1f1dc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.637489 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtpvl\" (UniqueName: \"kubernetes.io/projected/f431a085-378f-4a04-95ba-806aeee1f1dc-kube-api-access-mtpvl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f431a085-378f-4a04-95ba-806aeee1f1dc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.646828 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f431a085-378f-4a04-95ba-806aeee1f1dc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:34 crc kubenswrapper[4809]: I0312 09:39:34.780548 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 09:39:35 crc kubenswrapper[4809]: I0312 09:39:35.500502 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 09:39:36 crc kubenswrapper[4809]: I0312 09:39:36.063372 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f431a085-378f-4a04-95ba-806aeee1f1dc","Type":"ContainerStarted","Data":"0d730f72176f3e9213a58acd59d674195515e197421f9980571b15036fa3d7eb"} Mar 12 09:39:37 crc kubenswrapper[4809]: I0312 09:39:37.076925 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f431a085-378f-4a04-95ba-806aeee1f1dc","Type":"ContainerStarted","Data":"cd02b8b9d0d4ce5e58f92d3bc7fca23fc2062af8672a8843f31170441e1e91ab"} Mar 12 09:39:37 crc kubenswrapper[4809]: I0312 09:39:37.099541 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.908429596 podStartE2EDuration="3.099520822s" podCreationTimestamp="2026-03-12 09:39:34 +0000 UTC" firstStartedPulling="2026-03-12 09:39:35.487017276 +0000 UTC m=+6049.069053009" lastFinishedPulling="2026-03-12 09:39:36.678108502 +0000 UTC m=+6050.260144235" observedRunningTime="2026-03-12 09:39:37.0884459 +0000 UTC m=+6050.670481643" watchObservedRunningTime="2026-03-12 09:39:37.099520822 +0000 UTC m=+6050.681556555" Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.157747 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555140-7nmbx"] Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.160795 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555140-7nmbx" Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.164226 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.165520 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.165599 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.175673 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555140-7nmbx"] Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.295328 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2bx\" (UniqueName: \"kubernetes.io/projected/363c2df6-c2f2-4cfd-ad74-4a2a598cb809-kube-api-access-pm2bx\") pod \"auto-csr-approver-29555140-7nmbx\" (UID: \"363c2df6-c2f2-4cfd-ad74-4a2a598cb809\") " pod="openshift-infra/auto-csr-approver-29555140-7nmbx" Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.400211 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2bx\" (UniqueName: \"kubernetes.io/projected/363c2df6-c2f2-4cfd-ad74-4a2a598cb809-kube-api-access-pm2bx\") pod \"auto-csr-approver-29555140-7nmbx\" (UID: \"363c2df6-c2f2-4cfd-ad74-4a2a598cb809\") " pod="openshift-infra/auto-csr-approver-29555140-7nmbx" Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.425501 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2bx\" (UniqueName: \"kubernetes.io/projected/363c2df6-c2f2-4cfd-ad74-4a2a598cb809-kube-api-access-pm2bx\") pod \"auto-csr-approver-29555140-7nmbx\" (UID: \"363c2df6-c2f2-4cfd-ad74-4a2a598cb809\") " 
pod="openshift-infra/auto-csr-approver-29555140-7nmbx" Mar 12 09:40:00 crc kubenswrapper[4809]: I0312 09:40:00.482685 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555140-7nmbx" Mar 12 09:40:01 crc kubenswrapper[4809]: I0312 09:40:01.059343 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555140-7nmbx"] Mar 12 09:40:01 crc kubenswrapper[4809]: I0312 09:40:01.380600 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555140-7nmbx" event={"ID":"363c2df6-c2f2-4cfd-ad74-4a2a598cb809","Type":"ContainerStarted","Data":"a7c4a4e3bf7ba5e247749bb0f008163a45c4a6ee532db9ccae1c6ac06ed87131"} Mar 12 09:40:02 crc kubenswrapper[4809]: I0312 09:40:02.771368 4809 trace.go:236] Trace[837110883]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (12-Mar-2026 09:40:01.725) (total time: 1043ms): Mar 12 09:40:02 crc kubenswrapper[4809]: Trace[837110883]: [1.0434497s] [1.0434497s] END Mar 12 09:40:03 crc kubenswrapper[4809]: I0312 09:40:03.405162 4809 generic.go:334] "Generic (PLEG): container finished" podID="363c2df6-c2f2-4cfd-ad74-4a2a598cb809" containerID="6d41a315fa4f3b1fd87c667d783f452ad0a4791ab752fcd92d86b57bf800f08d" exitCode=0 Mar 12 09:40:03 crc kubenswrapper[4809]: I0312 09:40:03.405267 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555140-7nmbx" event={"ID":"363c2df6-c2f2-4cfd-ad74-4a2a598cb809","Type":"ContainerDied","Data":"6d41a315fa4f3b1fd87c667d783f452ad0a4791ab752fcd92d86b57bf800f08d"} Mar 12 09:40:05 crc kubenswrapper[4809]: I0312 09:40:05.182023 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555140-7nmbx" Mar 12 09:40:05 crc kubenswrapper[4809]: I0312 09:40:05.254074 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm2bx\" (UniqueName: \"kubernetes.io/projected/363c2df6-c2f2-4cfd-ad74-4a2a598cb809-kube-api-access-pm2bx\") pod \"363c2df6-c2f2-4cfd-ad74-4a2a598cb809\" (UID: \"363c2df6-c2f2-4cfd-ad74-4a2a598cb809\") " Mar 12 09:40:05 crc kubenswrapper[4809]: I0312 09:40:05.279751 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363c2df6-c2f2-4cfd-ad74-4a2a598cb809-kube-api-access-pm2bx" (OuterVolumeSpecName: "kube-api-access-pm2bx") pod "363c2df6-c2f2-4cfd-ad74-4a2a598cb809" (UID: "363c2df6-c2f2-4cfd-ad74-4a2a598cb809"). InnerVolumeSpecName "kube-api-access-pm2bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:40:05 crc kubenswrapper[4809]: I0312 09:40:05.358170 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm2bx\" (UniqueName: \"kubernetes.io/projected/363c2df6-c2f2-4cfd-ad74-4a2a598cb809-kube-api-access-pm2bx\") on node \"crc\" DevicePath \"\"" Mar 12 09:40:05 crc kubenswrapper[4809]: I0312 09:40:05.435273 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555140-7nmbx" event={"ID":"363c2df6-c2f2-4cfd-ad74-4a2a598cb809","Type":"ContainerDied","Data":"a7c4a4e3bf7ba5e247749bb0f008163a45c4a6ee532db9ccae1c6ac06ed87131"} Mar 12 09:40:05 crc kubenswrapper[4809]: I0312 09:40:05.435320 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7c4a4e3bf7ba5e247749bb0f008163a45c4a6ee532db9ccae1c6ac06ed87131" Mar 12 09:40:05 crc kubenswrapper[4809]: I0312 09:40:05.435351 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555140-7nmbx" Mar 12 09:40:06 crc kubenswrapper[4809]: I0312 09:40:06.291147 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555134-49r7n"] Mar 12 09:40:06 crc kubenswrapper[4809]: I0312 09:40:06.313555 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555134-49r7n"] Mar 12 09:40:07 crc kubenswrapper[4809]: I0312 09:40:07.130965 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934b370a-37b5-40d3-b543-fdafb84af38d" path="/var/lib/kubelet/pods/934b370a-37b5-40d3-b543-fdafb84af38d/volumes" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.295275 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pnl96/must-gather-256gx"] Mar 12 09:40:27 crc kubenswrapper[4809]: E0312 09:40:27.297189 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363c2df6-c2f2-4cfd-ad74-4a2a598cb809" containerName="oc" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.297208 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="363c2df6-c2f2-4cfd-ad74-4a2a598cb809" containerName="oc" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.297855 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="363c2df6-c2f2-4cfd-ad74-4a2a598cb809" containerName="oc" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.301468 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.306755 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pnl96"/"kube-root-ca.crt" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.306871 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pnl96"/"default-dockercfg-xjbz5" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.307017 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pnl96"/"openshift-service-ca.crt" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.327868 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pnl96/must-gather-256gx"] Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.392342 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af6d2578-2db0-448f-9bf5-c695df62f63b-must-gather-output\") pod \"must-gather-256gx\" (UID: \"af6d2578-2db0-448f-9bf5-c695df62f63b\") " pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.392736 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zbq\" (UniqueName: \"kubernetes.io/projected/af6d2578-2db0-448f-9bf5-c695df62f63b-kube-api-access-z9zbq\") pod \"must-gather-256gx\" (UID: \"af6d2578-2db0-448f-9bf5-c695df62f63b\") " pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.494711 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af6d2578-2db0-448f-9bf5-c695df62f63b-must-gather-output\") pod \"must-gather-256gx\" (UID: \"af6d2578-2db0-448f-9bf5-c695df62f63b\") " 
pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.495010 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zbq\" (UniqueName: \"kubernetes.io/projected/af6d2578-2db0-448f-9bf5-c695df62f63b-kube-api-access-z9zbq\") pod \"must-gather-256gx\" (UID: \"af6d2578-2db0-448f-9bf5-c695df62f63b\") " pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.495234 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af6d2578-2db0-448f-9bf5-c695df62f63b-must-gather-output\") pod \"must-gather-256gx\" (UID: \"af6d2578-2db0-448f-9bf5-c695df62f63b\") " pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.517382 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zbq\" (UniqueName: \"kubernetes.io/projected/af6d2578-2db0-448f-9bf5-c695df62f63b-kube-api-access-z9zbq\") pod \"must-gather-256gx\" (UID: \"af6d2578-2db0-448f-9bf5-c695df62f63b\") " pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:40:27 crc kubenswrapper[4809]: I0312 09:40:27.626616 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:40:28 crc kubenswrapper[4809]: W0312 09:40:28.309376 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf6d2578_2db0_448f_9bf5_c695df62f63b.slice/crio-a1f304f5270197974bd0121e5f533476a69aaa83a5cf665d2d192212ab9af776 WatchSource:0}: Error finding container a1f304f5270197974bd0121e5f533476a69aaa83a5cf665d2d192212ab9af776: Status 404 returned error can't find the container with id a1f304f5270197974bd0121e5f533476a69aaa83a5cf665d2d192212ab9af776 Mar 12 09:40:28 crc kubenswrapper[4809]: I0312 09:40:28.320489 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pnl96/must-gather-256gx"] Mar 12 09:40:28 crc kubenswrapper[4809]: I0312 09:40:28.798264 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/must-gather-256gx" event={"ID":"af6d2578-2db0-448f-9bf5-c695df62f63b","Type":"ContainerStarted","Data":"a1f304f5270197974bd0121e5f533476a69aaa83a5cf665d2d192212ab9af776"} Mar 12 09:40:37 crc kubenswrapper[4809]: I0312 09:40:37.924649 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/must-gather-256gx" event={"ID":"af6d2578-2db0-448f-9bf5-c695df62f63b","Type":"ContainerStarted","Data":"e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded"} Mar 12 09:40:37 crc kubenswrapper[4809]: I0312 09:40:37.925917 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/must-gather-256gx" event={"ID":"af6d2578-2db0-448f-9bf5-c695df62f63b","Type":"ContainerStarted","Data":"f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723"} Mar 12 09:40:37 crc kubenswrapper[4809]: I0312 09:40:37.953653 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pnl96/must-gather-256gx" podStartSLOduration=2.202159809 
podStartE2EDuration="10.953625359s" podCreationTimestamp="2026-03-12 09:40:27 +0000 UTC" firstStartedPulling="2026-03-12 09:40:28.313862889 +0000 UTC m=+6101.895898612" lastFinishedPulling="2026-03-12 09:40:37.065328429 +0000 UTC m=+6110.647364162" observedRunningTime="2026-03-12 09:40:37.947189913 +0000 UTC m=+6111.529225676" watchObservedRunningTime="2026-03-12 09:40:37.953625359 +0000 UTC m=+6111.535661092" Mar 12 09:40:44 crc kubenswrapper[4809]: I0312 09:40:44.087214 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pnl96/crc-debug-5kf86"] Mar 12 09:40:44 crc kubenswrapper[4809]: I0312 09:40:44.091782 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:40:44 crc kubenswrapper[4809]: I0312 09:40:44.181803 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9ba1c91-fee8-4d89-810a-422d5fa89d27-host\") pod \"crc-debug-5kf86\" (UID: \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\") " pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:40:44 crc kubenswrapper[4809]: I0312 09:40:44.181891 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vfp2\" (UniqueName: \"kubernetes.io/projected/f9ba1c91-fee8-4d89-810a-422d5fa89d27-kube-api-access-2vfp2\") pod \"crc-debug-5kf86\" (UID: \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\") " pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:40:44 crc kubenswrapper[4809]: I0312 09:40:44.284845 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9ba1c91-fee8-4d89-810a-422d5fa89d27-host\") pod \"crc-debug-5kf86\" (UID: \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\") " pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:40:44 crc kubenswrapper[4809]: I0312 09:40:44.284970 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vfp2\" (UniqueName: \"kubernetes.io/projected/f9ba1c91-fee8-4d89-810a-422d5fa89d27-kube-api-access-2vfp2\") pod \"crc-debug-5kf86\" (UID: \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\") " pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:40:44 crc kubenswrapper[4809]: I0312 09:40:44.286241 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9ba1c91-fee8-4d89-810a-422d5fa89d27-host\") pod \"crc-debug-5kf86\" (UID: \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\") " pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:40:44 crc kubenswrapper[4809]: I0312 09:40:44.999011 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vfp2\" (UniqueName: \"kubernetes.io/projected/f9ba1c91-fee8-4d89-810a-422d5fa89d27-kube-api-access-2vfp2\") pod \"crc-debug-5kf86\" (UID: \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\") " pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:40:45 crc kubenswrapper[4809]: I0312 09:40:45.012362 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:40:46 crc kubenswrapper[4809]: I0312 09:40:46.038593 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-5kf86" event={"ID":"f9ba1c91-fee8-4d89-810a-422d5fa89d27","Type":"ContainerStarted","Data":"b68eb24e19b3c94fdc1e56e6173e8bafa3d1044d8f4028719ff61a3d3e3b6259"} Mar 12 09:40:52 crc kubenswrapper[4809]: I0312 09:40:52.466398 4809 scope.go:117] "RemoveContainer" containerID="26bb9616e56be461e072f430a1a13103b942df45086801296883ae0a11a1fd76" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.245682 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-5kf86" event={"ID":"f9ba1c91-fee8-4d89-810a-422d5fa89d27","Type":"ContainerStarted","Data":"449e5cf6a59f9b7dfa719db451d1a172aeb37fbaac048edbe084ff068a76da5b"} Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.275673 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pnl96/crc-debug-5kf86" podStartSLOduration=1.856885842 podStartE2EDuration="15.275643256s" podCreationTimestamp="2026-03-12 09:40:44 +0000 UTC" firstStartedPulling="2026-03-12 09:40:45.147340155 +0000 UTC m=+6118.729375878" lastFinishedPulling="2026-03-12 09:40:58.566097559 +0000 UTC m=+6132.148133292" observedRunningTime="2026-03-12 09:40:59.266944429 +0000 UTC m=+6132.848980162" watchObservedRunningTime="2026-03-12 09:40:59.275643256 +0000 UTC m=+6132.857678989" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.545229 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4m948"] Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.549079 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.559900 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4m948"] Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.673699 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-catalog-content\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.673890 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-utilities\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.673984 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd95l\" (UniqueName: \"kubernetes.io/projected/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-kube-api-access-xd95l\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.776503 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd95l\" (UniqueName: \"kubernetes.io/projected/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-kube-api-access-xd95l\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.776603 4809 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-catalog-content\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.776786 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-utilities\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.777094 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-catalog-content\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.777105 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-utilities\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.819137 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd95l\" (UniqueName: \"kubernetes.io/projected/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-kube-api-access-xd95l\") pod \"community-operators-4m948\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " pod="openshift-marketplace/community-operators-4m948" Mar 12 09:40:59 crc kubenswrapper[4809]: I0312 09:40:59.876818 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4m948" Mar 12 09:41:01 crc kubenswrapper[4809]: I0312 09:41:01.290133 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4m948"] Mar 12 09:41:01 crc kubenswrapper[4809]: W0312 09:41:01.922052 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e0ec1b_f6aa_4c66_9664_78beeb14c66e.slice/crio-389cdcacfdf0985b703cd8895c7f3036692869035e09c4e9814cbb4dc6a4c5c9 WatchSource:0}: Error finding container 389cdcacfdf0985b703cd8895c7f3036692869035e09c4e9814cbb4dc6a4c5c9: Status 404 returned error can't find the container with id 389cdcacfdf0985b703cd8895c7f3036692869035e09c4e9814cbb4dc6a4c5c9 Mar 12 09:41:02 crc kubenswrapper[4809]: I0312 09:41:02.315016 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m948" event={"ID":"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e","Type":"ContainerStarted","Data":"389cdcacfdf0985b703cd8895c7f3036692869035e09c4e9814cbb4dc6a4c5c9"} Mar 12 09:41:03 crc kubenswrapper[4809]: I0312 09:41:03.327604 4809 generic.go:334] "Generic (PLEG): container finished" podID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerID="e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec" exitCode=0 Mar 12 09:41:03 crc kubenswrapper[4809]: I0312 09:41:03.327702 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m948" event={"ID":"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e","Type":"ContainerDied","Data":"e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec"} Mar 12 09:41:05 crc kubenswrapper[4809]: I0312 09:41:05.363070 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m948" 
event={"ID":"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e","Type":"ContainerStarted","Data":"8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec"} Mar 12 09:41:07 crc kubenswrapper[4809]: I0312 09:41:07.387250 4809 generic.go:334] "Generic (PLEG): container finished" podID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerID="8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec" exitCode=0 Mar 12 09:41:07 crc kubenswrapper[4809]: I0312 09:41:07.387302 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m948" event={"ID":"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e","Type":"ContainerDied","Data":"8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec"} Mar 12 09:41:10 crc kubenswrapper[4809]: I0312 09:41:10.445333 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m948" event={"ID":"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e","Type":"ContainerStarted","Data":"0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990"} Mar 12 09:41:10 crc kubenswrapper[4809]: I0312 09:41:10.482856 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4m948" podStartSLOduration=5.590964688 podStartE2EDuration="11.482810721s" podCreationTimestamp="2026-03-12 09:40:59 +0000 UTC" firstStartedPulling="2026-03-12 09:41:03.331369116 +0000 UTC m=+6136.913404849" lastFinishedPulling="2026-03-12 09:41:09.223215139 +0000 UTC m=+6142.805250882" observedRunningTime="2026-03-12 09:41:10.471797832 +0000 UTC m=+6144.053833585" watchObservedRunningTime="2026-03-12 09:41:10.482810721 +0000 UTC m=+6144.064846454" Mar 12 09:41:19 crc kubenswrapper[4809]: I0312 09:41:19.877982 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4m948" Mar 12 09:41:19 crc kubenswrapper[4809]: I0312 09:41:19.878612 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-4m948" Mar 12 09:41:20 crc kubenswrapper[4809]: I0312 09:41:20.940497 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4m948" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="registry-server" probeResult="failure" output=< Mar 12 09:41:20 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:41:20 crc kubenswrapper[4809]: > Mar 12 09:41:29 crc kubenswrapper[4809]: I0312 09:41:29.983865 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4m948" Mar 12 09:41:30 crc kubenswrapper[4809]: I0312 09:41:30.042526 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4m948" Mar 12 09:41:31 crc kubenswrapper[4809]: I0312 09:41:31.045885 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4m948"] Mar 12 09:41:31 crc kubenswrapper[4809]: I0312 09:41:31.705796 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4m948" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="registry-server" containerID="cri-o://0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990" gracePeriod=2 Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.482854 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4m948" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.669225 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd95l\" (UniqueName: \"kubernetes.io/projected/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-kube-api-access-xd95l\") pod \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.669313 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-catalog-content\") pod \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.669464 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-utilities\") pod \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\" (UID: \"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e\") " Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.670178 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-utilities" (OuterVolumeSpecName: "utilities") pod "a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" (UID: "a8e0ec1b-f6aa-4c66-9664-78beeb14c66e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.701342 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-kube-api-access-xd95l" (OuterVolumeSpecName: "kube-api-access-xd95l") pod "a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" (UID: "a8e0ec1b-f6aa-4c66-9664-78beeb14c66e"). InnerVolumeSpecName "kube-api-access-xd95l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.719226 4809 generic.go:334] "Generic (PLEG): container finished" podID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerID="0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990" exitCode=0 Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.719269 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m948" event={"ID":"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e","Type":"ContainerDied","Data":"0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990"} Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.719296 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m948" event={"ID":"a8e0ec1b-f6aa-4c66-9664-78beeb14c66e","Type":"ContainerDied","Data":"389cdcacfdf0985b703cd8895c7f3036692869035e09c4e9814cbb4dc6a4c5c9"} Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.719317 4809 scope.go:117] "RemoveContainer" containerID="0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.719385 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4m948" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.753294 4809 scope.go:117] "RemoveContainer" containerID="8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.754993 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" (UID: "a8e0ec1b-f6aa-4c66-9664-78beeb14c66e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.771792 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.771838 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd95l\" (UniqueName: \"kubernetes.io/projected/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-kube-api-access-xd95l\") on node \"crc\" DevicePath \"\"" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.771852 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.773787 4809 scope.go:117] "RemoveContainer" containerID="e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.834632 4809 scope.go:117] "RemoveContainer" containerID="0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990" Mar 12 09:41:32 crc kubenswrapper[4809]: E0312 09:41:32.835189 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990\": container with ID starting with 0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990 not found: ID does not exist" containerID="0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.835229 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990"} err="failed to get container status 
\"0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990\": rpc error: code = NotFound desc = could not find container \"0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990\": container with ID starting with 0c77d17c9de2662e8203a849f3ef0653d5467eae1209f9330f5c4ece00543990 not found: ID does not exist" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.835252 4809 scope.go:117] "RemoveContainer" containerID="8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec" Mar 12 09:41:32 crc kubenswrapper[4809]: E0312 09:41:32.836046 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec\": container with ID starting with 8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec not found: ID does not exist" containerID="8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.836198 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec"} err="failed to get container status \"8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec\": rpc error: code = NotFound desc = could not find container \"8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec\": container with ID starting with 8b331f97c1728384a3605205c737c1a017e000ccf062089b7e0b4523973c1bec not found: ID does not exist" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.836333 4809 scope.go:117] "RemoveContainer" containerID="e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec" Mar 12 09:41:32 crc kubenswrapper[4809]: E0312 09:41:32.836813 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec\": container with ID starting with e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec not found: ID does not exist" containerID="e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec" Mar 12 09:41:32 crc kubenswrapper[4809]: I0312 09:41:32.836841 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec"} err="failed to get container status \"e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec\": rpc error: code = NotFound desc = could not find container \"e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec\": container with ID starting with e40499e9fd3d34f198dfff09e8e07a0ad5570af16687d16dc433484fcd4e66ec not found: ID does not exist" Mar 12 09:41:33 crc kubenswrapper[4809]: I0312 09:41:33.057357 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4m948"] Mar 12 09:41:33 crc kubenswrapper[4809]: I0312 09:41:33.068909 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4m948"] Mar 12 09:41:33 crc kubenswrapper[4809]: I0312 09:41:33.120696 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" path="/var/lib/kubelet/pods/a8e0ec1b-f6aa-4c66-9664-78beeb14c66e/volumes" Mar 12 09:41:45 crc kubenswrapper[4809]: I0312 09:41:45.049049 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:41:45 crc kubenswrapper[4809]: I0312 09:41:45.049690 4809 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:41:54 crc kubenswrapper[4809]: I0312 09:41:54.344671 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9ba1c91-fee8-4d89-810a-422d5fa89d27" containerID="449e5cf6a59f9b7dfa719db451d1a172aeb37fbaac048edbe084ff068a76da5b" exitCode=0 Mar 12 09:41:54 crc kubenswrapper[4809]: I0312 09:41:54.344748 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-5kf86" event={"ID":"f9ba1c91-fee8-4d89-810a-422d5fa89d27","Type":"ContainerDied","Data":"449e5cf6a59f9b7dfa719db451d1a172aeb37fbaac048edbe084ff068a76da5b"} Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.492873 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.532577 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pnl96/crc-debug-5kf86"] Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.549074 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pnl96/crc-debug-5kf86"] Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.599363 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9ba1c91-fee8-4d89-810a-422d5fa89d27-host\") pod \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\" (UID: \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\") " Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.599492 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9ba1c91-fee8-4d89-810a-422d5fa89d27-host" (OuterVolumeSpecName: "host") pod "f9ba1c91-fee8-4d89-810a-422d5fa89d27" 
(UID: "f9ba1c91-fee8-4d89-810a-422d5fa89d27"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.600494 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vfp2\" (UniqueName: \"kubernetes.io/projected/f9ba1c91-fee8-4d89-810a-422d5fa89d27-kube-api-access-2vfp2\") pod \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\" (UID: \"f9ba1c91-fee8-4d89-810a-422d5fa89d27\") " Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.601319 4809 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9ba1c91-fee8-4d89-810a-422d5fa89d27-host\") on node \"crc\" DevicePath \"\"" Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.611205 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ba1c91-fee8-4d89-810a-422d5fa89d27-kube-api-access-2vfp2" (OuterVolumeSpecName: "kube-api-access-2vfp2") pod "f9ba1c91-fee8-4d89-810a-422d5fa89d27" (UID: "f9ba1c91-fee8-4d89-810a-422d5fa89d27"). InnerVolumeSpecName "kube-api-access-2vfp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:41:55 crc kubenswrapper[4809]: I0312 09:41:55.704743 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vfp2\" (UniqueName: \"kubernetes.io/projected/f9ba1c91-fee8-4d89-810a-422d5fa89d27-kube-api-access-2vfp2\") on node \"crc\" DevicePath \"\"" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.390546 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b68eb24e19b3c94fdc1e56e6173e8bafa3d1044d8f4028719ff61a3d3e3b6259" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.390969 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-5kf86" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.766260 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pnl96/crc-debug-wpd4d"] Mar 12 09:41:56 crc kubenswrapper[4809]: E0312 09:41:56.766886 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="registry-server" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.766903 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="registry-server" Mar 12 09:41:56 crc kubenswrapper[4809]: E0312 09:41:56.766914 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ba1c91-fee8-4d89-810a-422d5fa89d27" containerName="container-00" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.766920 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ba1c91-fee8-4d89-810a-422d5fa89d27" containerName="container-00" Mar 12 09:41:56 crc kubenswrapper[4809]: E0312 09:41:56.766968 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="extract-utilities" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.766975 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="extract-utilities" Mar 12 09:41:56 crc kubenswrapper[4809]: E0312 09:41:56.766991 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="extract-content" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.766997 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="extract-content" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.767231 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ba1c91-fee8-4d89-810a-422d5fa89d27" 
containerName="container-00" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.767268 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e0ec1b-f6aa-4c66-9664-78beeb14c66e" containerName="registry-server" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.768787 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.932705 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21220d18-27e9-448d-85be-019c6319f668-host\") pod \"crc-debug-wpd4d\" (UID: \"21220d18-27e9-448d-85be-019c6319f668\") " pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:56 crc kubenswrapper[4809]: I0312 09:41:56.933082 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrvgq\" (UniqueName: \"kubernetes.io/projected/21220d18-27e9-448d-85be-019c6319f668-kube-api-access-wrvgq\") pod \"crc-debug-wpd4d\" (UID: \"21220d18-27e9-448d-85be-019c6319f668\") " pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:57 crc kubenswrapper[4809]: I0312 09:41:57.035416 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21220d18-27e9-448d-85be-019c6319f668-host\") pod \"crc-debug-wpd4d\" (UID: \"21220d18-27e9-448d-85be-019c6319f668\") " pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:57 crc kubenswrapper[4809]: I0312 09:41:57.035492 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrvgq\" (UniqueName: \"kubernetes.io/projected/21220d18-27e9-448d-85be-019c6319f668-kube-api-access-wrvgq\") pod \"crc-debug-wpd4d\" (UID: \"21220d18-27e9-448d-85be-019c6319f668\") " pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:57 crc 
kubenswrapper[4809]: I0312 09:41:57.035523 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21220d18-27e9-448d-85be-019c6319f668-host\") pod \"crc-debug-wpd4d\" (UID: \"21220d18-27e9-448d-85be-019c6319f668\") " pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:57 crc kubenswrapper[4809]: I0312 09:41:57.055649 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrvgq\" (UniqueName: \"kubernetes.io/projected/21220d18-27e9-448d-85be-019c6319f668-kube-api-access-wrvgq\") pod \"crc-debug-wpd4d\" (UID: \"21220d18-27e9-448d-85be-019c6319f668\") " pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:57 crc kubenswrapper[4809]: I0312 09:41:57.090985 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:57 crc kubenswrapper[4809]: I0312 09:41:57.126969 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ba1c91-fee8-4d89-810a-422d5fa89d27" path="/var/lib/kubelet/pods/f9ba1c91-fee8-4d89-810a-422d5fa89d27/volumes" Mar 12 09:41:57 crc kubenswrapper[4809]: I0312 09:41:57.409496 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-wpd4d" event={"ID":"21220d18-27e9-448d-85be-019c6319f668","Type":"ContainerStarted","Data":"bba63ce049831f1e3377230ca5151228973952bd2a4c40d4fbda41d62ddc079c"} Mar 12 09:41:57 crc kubenswrapper[4809]: I0312 09:41:57.409894 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-wpd4d" event={"ID":"21220d18-27e9-448d-85be-019c6319f668","Type":"ContainerStarted","Data":"0a169802d12e44636ffea0d19b5d02798c3e98b2052494dc9a93bb3a86179733"} Mar 12 09:41:57 crc kubenswrapper[4809]: I0312 09:41:57.436162 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pnl96/crc-debug-wpd4d" 
podStartSLOduration=1.436133623 podStartE2EDuration="1.436133623s" podCreationTimestamp="2026-03-12 09:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 09:41:57.424689731 +0000 UTC m=+6191.006725464" watchObservedRunningTime="2026-03-12 09:41:57.436133623 +0000 UTC m=+6191.018169366" Mar 12 09:41:58 crc kubenswrapper[4809]: I0312 09:41:58.423993 4809 generic.go:334] "Generic (PLEG): container finished" podID="21220d18-27e9-448d-85be-019c6319f668" containerID="bba63ce049831f1e3377230ca5151228973952bd2a4c40d4fbda41d62ddc079c" exitCode=0 Mar 12 09:41:58 crc kubenswrapper[4809]: I0312 09:41:58.424055 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-wpd4d" event={"ID":"21220d18-27e9-448d-85be-019c6319f668","Type":"ContainerDied","Data":"bba63ce049831f1e3377230ca5151228973952bd2a4c40d4fbda41d62ddc079c"} Mar 12 09:41:59 crc kubenswrapper[4809]: I0312 09:41:59.584800 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:41:59 crc kubenswrapper[4809]: I0312 09:41:59.602752 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrvgq\" (UniqueName: \"kubernetes.io/projected/21220d18-27e9-448d-85be-019c6319f668-kube-api-access-wrvgq\") pod \"21220d18-27e9-448d-85be-019c6319f668\" (UID: \"21220d18-27e9-448d-85be-019c6319f668\") " Mar 12 09:41:59 crc kubenswrapper[4809]: I0312 09:41:59.603599 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21220d18-27e9-448d-85be-019c6319f668-host\") pod \"21220d18-27e9-448d-85be-019c6319f668\" (UID: \"21220d18-27e9-448d-85be-019c6319f668\") " Mar 12 09:41:59 crc kubenswrapper[4809]: I0312 09:41:59.604273 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21220d18-27e9-448d-85be-019c6319f668-host" (OuterVolumeSpecName: "host") pod "21220d18-27e9-448d-85be-019c6319f668" (UID: "21220d18-27e9-448d-85be-019c6319f668"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 09:41:59 crc kubenswrapper[4809]: I0312 09:41:59.604449 4809 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21220d18-27e9-448d-85be-019c6319f668-host\") on node \"crc\" DevicePath \"\"" Mar 12 09:41:59 crc kubenswrapper[4809]: I0312 09:41:59.612970 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21220d18-27e9-448d-85be-019c6319f668-kube-api-access-wrvgq" (OuterVolumeSpecName: "kube-api-access-wrvgq") pod "21220d18-27e9-448d-85be-019c6319f668" (UID: "21220d18-27e9-448d-85be-019c6319f668"). InnerVolumeSpecName "kube-api-access-wrvgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:41:59 crc kubenswrapper[4809]: I0312 09:41:59.711259 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrvgq\" (UniqueName: \"kubernetes.io/projected/21220d18-27e9-448d-85be-019c6319f668-kube-api-access-wrvgq\") on node \"crc\" DevicePath \"\"" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.183418 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555142-zvrth"] Mar 12 09:42:00 crc kubenswrapper[4809]: E0312 09:42:00.184130 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21220d18-27e9-448d-85be-019c6319f668" containerName="container-00" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.184157 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="21220d18-27e9-448d-85be-019c6319f668" containerName="container-00" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.184487 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="21220d18-27e9-448d-85be-019c6319f668" containerName="container-00" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.185570 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555142-zvrth" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.188385 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.188533 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.188657 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.194723 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555142-zvrth"] Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.219604 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvp4\" (UniqueName: \"kubernetes.io/projected/5935bc45-f371-4dd3-b7ce-65955339ea44-kube-api-access-5cvp4\") pod \"auto-csr-approver-29555142-zvrth\" (UID: \"5935bc45-f371-4dd3-b7ce-65955339ea44\") " pod="openshift-infra/auto-csr-approver-29555142-zvrth" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.225248 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pnl96/crc-debug-wpd4d"] Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.235597 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pnl96/crc-debug-wpd4d"] Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.321752 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cvp4\" (UniqueName: \"kubernetes.io/projected/5935bc45-f371-4dd3-b7ce-65955339ea44-kube-api-access-5cvp4\") pod \"auto-csr-approver-29555142-zvrth\" (UID: \"5935bc45-f371-4dd3-b7ce-65955339ea44\") " pod="openshift-infra/auto-csr-approver-29555142-zvrth" Mar 12 09:42:00 crc 
kubenswrapper[4809]: I0312 09:42:00.346156 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cvp4\" (UniqueName: \"kubernetes.io/projected/5935bc45-f371-4dd3-b7ce-65955339ea44-kube-api-access-5cvp4\") pod \"auto-csr-approver-29555142-zvrth\" (UID: \"5935bc45-f371-4dd3-b7ce-65955339ea44\") " pod="openshift-infra/auto-csr-approver-29555142-zvrth" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.445696 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a169802d12e44636ffea0d19b5d02798c3e98b2052494dc9a93bb3a86179733" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.445744 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-wpd4d" Mar 12 09:42:00 crc kubenswrapper[4809]: I0312 09:42:00.504907 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555142-zvrth" Mar 12 09:42:01 crc kubenswrapper[4809]: W0312 09:42:01.025014 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5935bc45_f371_4dd3_b7ce_65955339ea44.slice/crio-4150f007acab82fa8188ce400a8e6a84bbaaed8f02dba838c33cb8989fd38330 WatchSource:0}: Error finding container 4150f007acab82fa8188ce400a8e6a84bbaaed8f02dba838c33cb8989fd38330: Status 404 returned error can't find the container with id 4150f007acab82fa8188ce400a8e6a84bbaaed8f02dba838c33cb8989fd38330 Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.028552 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555142-zvrth"] Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.124626 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21220d18-27e9-448d-85be-019c6319f668" path="/var/lib/kubelet/pods/21220d18-27e9-448d-85be-019c6319f668/volumes" Mar 12 09:42:01 crc 
kubenswrapper[4809]: I0312 09:42:01.457286 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pnl96/crc-debug-6xbrr"] Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.459977 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555142-zvrth" event={"ID":"5935bc45-f371-4dd3-b7ce-65955339ea44","Type":"ContainerStarted","Data":"4150f007acab82fa8188ce400a8e6a84bbaaed8f02dba838c33cb8989fd38330"} Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.460195 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.659019 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e961be0-97b0-45fe-84db-61b86de4b9fa-host\") pod \"crc-debug-6xbrr\" (UID: \"1e961be0-97b0-45fe-84db-61b86de4b9fa\") " pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.659862 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjzt\" (UniqueName: \"kubernetes.io/projected/1e961be0-97b0-45fe-84db-61b86de4b9fa-kube-api-access-vtjzt\") pod \"crc-debug-6xbrr\" (UID: \"1e961be0-97b0-45fe-84db-61b86de4b9fa\") " pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.763094 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjzt\" (UniqueName: \"kubernetes.io/projected/1e961be0-97b0-45fe-84db-61b86de4b9fa-kube-api-access-vtjzt\") pod \"crc-debug-6xbrr\" (UID: \"1e961be0-97b0-45fe-84db-61b86de4b9fa\") " pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.763629 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/1e961be0-97b0-45fe-84db-61b86de4b9fa-host\") pod \"crc-debug-6xbrr\" (UID: \"1e961be0-97b0-45fe-84db-61b86de4b9fa\") " pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.763799 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e961be0-97b0-45fe-84db-61b86de4b9fa-host\") pod \"crc-debug-6xbrr\" (UID: \"1e961be0-97b0-45fe-84db-61b86de4b9fa\") " pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:01 crc kubenswrapper[4809]: I0312 09:42:01.794000 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjzt\" (UniqueName: \"kubernetes.io/projected/1e961be0-97b0-45fe-84db-61b86de4b9fa-kube-api-access-vtjzt\") pod \"crc-debug-6xbrr\" (UID: \"1e961be0-97b0-45fe-84db-61b86de4b9fa\") " pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:02 crc kubenswrapper[4809]: I0312 09:42:02.091549 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:02 crc kubenswrapper[4809]: I0312 09:42:02.473495 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" event={"ID":"1e961be0-97b0-45fe-84db-61b86de4b9fa","Type":"ContainerStarted","Data":"8aa61896e9119cbc4be4e9e4098ffca8515977f68996a0eb96da39217219014f"} Mar 12 09:42:02 crc kubenswrapper[4809]: I0312 09:42:02.474341 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" event={"ID":"1e961be0-97b0-45fe-84db-61b86de4b9fa","Type":"ContainerStarted","Data":"f9ff398f2f2e767dbffd8bbb06bad48e2c1142c6ed44563fce697f785a66796c"} Mar 12 09:42:02 crc kubenswrapper[4809]: I0312 09:42:02.496431 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" podStartSLOduration=1.496411581 podStartE2EDuration="1.496411581s" podCreationTimestamp="2026-03-12 09:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 09:42:02.491715513 +0000 UTC m=+6196.073751247" watchObservedRunningTime="2026-03-12 09:42:02.496411581 +0000 UTC m=+6196.078447304" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.484259 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cqf4x"] Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.496345 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.499494 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555142-zvrth" event={"ID":"5935bc45-f371-4dd3-b7ce-65955339ea44","Type":"ContainerStarted","Data":"3612feef0c6610b319319f4affa5f692e9111fc6f14218b40866899ff4aff337"} Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.507782 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqf4x"] Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.525575 4809 generic.go:334] "Generic (PLEG): container finished" podID="1e961be0-97b0-45fe-84db-61b86de4b9fa" containerID="8aa61896e9119cbc4be4e9e4098ffca8515977f68996a0eb96da39217219014f" exitCode=0 Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.525655 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" event={"ID":"1e961be0-97b0-45fe-84db-61b86de4b9fa","Type":"ContainerDied","Data":"8aa61896e9119cbc4be4e9e4098ffca8515977f68996a0eb96da39217219014f"} Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.526201 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrhz\" (UniqueName: \"kubernetes.io/projected/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-kube-api-access-vcrhz\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.526313 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-utilities\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc 
kubenswrapper[4809]: I0312 09:42:03.526440 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-catalog-content\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.588137 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555142-zvrth" podStartSLOduration=2.5599314939999998 podStartE2EDuration="3.588102337s" podCreationTimestamp="2026-03-12 09:42:00 +0000 UTC" firstStartedPulling="2026-03-12 09:42:01.028239702 +0000 UTC m=+6194.610275435" lastFinishedPulling="2026-03-12 09:42:02.056410545 +0000 UTC m=+6195.638446278" observedRunningTime="2026-03-12 09:42:03.560946297 +0000 UTC m=+6197.142982030" watchObservedRunningTime="2026-03-12 09:42:03.588102337 +0000 UTC m=+6197.170138070" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.629161 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-catalog-content\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.629728 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-catalog-content\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.630143 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrhz\" 
(UniqueName: \"kubernetes.io/projected/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-kube-api-access-vcrhz\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.630357 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-utilities\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.630844 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-utilities\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.667890 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrhz\" (UniqueName: \"kubernetes.io/projected/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-kube-api-access-vcrhz\") pod \"redhat-marketplace-cqf4x\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:03 crc kubenswrapper[4809]: I0312 09:42:03.850758 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:04 crc kubenswrapper[4809]: I0312 09:42:04.416551 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqf4x"] Mar 12 09:42:05 crc kubenswrapper[4809]: W0312 09:42:05.204377 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9eb0451_aecf_42c8_a97f_a8f1393d6e1c.slice/crio-fa8dd4a0697dd4bb72d37f6ca746da71a0d87fddfdc978c1b53d637d0ffea09f WatchSource:0}: Error finding container fa8dd4a0697dd4bb72d37f6ca746da71a0d87fddfdc978c1b53d637d0ffea09f: Status 404 returned error can't find the container with id fa8dd4a0697dd4bb72d37f6ca746da71a0d87fddfdc978c1b53d637d0ffea09f Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.550252 4809 generic.go:334] "Generic (PLEG): container finished" podID="5935bc45-f371-4dd3-b7ce-65955339ea44" containerID="3612feef0c6610b319319f4affa5f692e9111fc6f14218b40866899ff4aff337" exitCode=0 Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.550319 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555142-zvrth" event={"ID":"5935bc45-f371-4dd3-b7ce-65955339ea44","Type":"ContainerDied","Data":"3612feef0c6610b319319f4affa5f692e9111fc6f14218b40866899ff4aff337"} Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.553818 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqf4x" event={"ID":"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c","Type":"ContainerStarted","Data":"fa8dd4a0697dd4bb72d37f6ca746da71a0d87fddfdc978c1b53d637d0ffea09f"} Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.556005 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" 
event={"ID":"1e961be0-97b0-45fe-84db-61b86de4b9fa","Type":"ContainerDied","Data":"f9ff398f2f2e767dbffd8bbb06bad48e2c1142c6ed44563fce697f785a66796c"} Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.556049 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ff398f2f2e767dbffd8bbb06bad48e2c1142c6ed44563fce697f785a66796c" Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.599225 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.671786 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pnl96/crc-debug-6xbrr"] Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.695775 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e961be0-97b0-45fe-84db-61b86de4b9fa-host\") pod \"1e961be0-97b0-45fe-84db-61b86de4b9fa\" (UID: \"1e961be0-97b0-45fe-84db-61b86de4b9fa\") " Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.695950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtjzt\" (UniqueName: \"kubernetes.io/projected/1e961be0-97b0-45fe-84db-61b86de4b9fa-kube-api-access-vtjzt\") pod \"1e961be0-97b0-45fe-84db-61b86de4b9fa\" (UID: \"1e961be0-97b0-45fe-84db-61b86de4b9fa\") " Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.696416 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e961be0-97b0-45fe-84db-61b86de4b9fa-host" (OuterVolumeSpecName: "host") pod "1e961be0-97b0-45fe-84db-61b86de4b9fa" (UID: "1e961be0-97b0-45fe-84db-61b86de4b9fa"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.700855 4809 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e961be0-97b0-45fe-84db-61b86de4b9fa-host\") on node \"crc\" DevicePath \"\"" Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.715357 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e961be0-97b0-45fe-84db-61b86de4b9fa-kube-api-access-vtjzt" (OuterVolumeSpecName: "kube-api-access-vtjzt") pod "1e961be0-97b0-45fe-84db-61b86de4b9fa" (UID: "1e961be0-97b0-45fe-84db-61b86de4b9fa"). InnerVolumeSpecName "kube-api-access-vtjzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.728871 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pnl96/crc-debug-6xbrr"] Mar 12 09:42:05 crc kubenswrapper[4809]: I0312 09:42:05.803606 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtjzt\" (UniqueName: \"kubernetes.io/projected/1e961be0-97b0-45fe-84db-61b86de4b9fa-kube-api-access-vtjzt\") on node \"crc\" DevicePath \"\"" Mar 12 09:42:06 crc kubenswrapper[4809]: I0312 09:42:06.568824 4809 generic.go:334] "Generic (PLEG): container finished" podID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerID="f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e" exitCode=0 Mar 12 09:42:06 crc kubenswrapper[4809]: I0312 09:42:06.568907 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqf4x" event={"ID":"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c","Type":"ContainerDied","Data":"f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e"} Mar 12 09:42:06 crc kubenswrapper[4809]: I0312 09:42:06.569209 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pnl96/crc-debug-6xbrr" Mar 12 09:42:07 crc kubenswrapper[4809]: I0312 09:42:07.132021 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e961be0-97b0-45fe-84db-61b86de4b9fa" path="/var/lib/kubelet/pods/1e961be0-97b0-45fe-84db-61b86de4b9fa/volumes" Mar 12 09:42:07 crc kubenswrapper[4809]: I0312 09:42:07.826381 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555142-zvrth" Mar 12 09:42:07 crc kubenswrapper[4809]: I0312 09:42:07.950030 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cvp4\" (UniqueName: \"kubernetes.io/projected/5935bc45-f371-4dd3-b7ce-65955339ea44-kube-api-access-5cvp4\") pod \"5935bc45-f371-4dd3-b7ce-65955339ea44\" (UID: \"5935bc45-f371-4dd3-b7ce-65955339ea44\") " Mar 12 09:42:07 crc kubenswrapper[4809]: I0312 09:42:07.957718 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5935bc45-f371-4dd3-b7ce-65955339ea44-kube-api-access-5cvp4" (OuterVolumeSpecName: "kube-api-access-5cvp4") pod "5935bc45-f371-4dd3-b7ce-65955339ea44" (UID: "5935bc45-f371-4dd3-b7ce-65955339ea44"). InnerVolumeSpecName "kube-api-access-5cvp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:42:08 crc kubenswrapper[4809]: I0312 09:42:08.053271 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cvp4\" (UniqueName: \"kubernetes.io/projected/5935bc45-f371-4dd3-b7ce-65955339ea44-kube-api-access-5cvp4\") on node \"crc\" DevicePath \"\"" Mar 12 09:42:08 crc kubenswrapper[4809]: I0312 09:42:08.599561 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqf4x" event={"ID":"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c","Type":"ContainerStarted","Data":"c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed"} Mar 12 09:42:08 crc kubenswrapper[4809]: I0312 09:42:08.603037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555142-zvrth" event={"ID":"5935bc45-f371-4dd3-b7ce-65955339ea44","Type":"ContainerDied","Data":"4150f007acab82fa8188ce400a8e6a84bbaaed8f02dba838c33cb8989fd38330"} Mar 12 09:42:08 crc kubenswrapper[4809]: I0312 09:42:08.603071 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4150f007acab82fa8188ce400a8e6a84bbaaed8f02dba838c33cb8989fd38330" Mar 12 09:42:08 crc kubenswrapper[4809]: I0312 09:42:08.603242 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555142-zvrth" Mar 12 09:42:08 crc kubenswrapper[4809]: I0312 09:42:08.944254 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555136-46k99"] Mar 12 09:42:08 crc kubenswrapper[4809]: I0312 09:42:08.955600 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555136-46k99"] Mar 12 09:42:09 crc kubenswrapper[4809]: I0312 09:42:09.117520 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8b01b9-e591-47f5-836f-13323402edef" path="/var/lib/kubelet/pods/0c8b01b9-e591-47f5-836f-13323402edef/volumes" Mar 12 09:42:09 crc kubenswrapper[4809]: I0312 09:42:09.672156 4809 generic.go:334] "Generic (PLEG): container finished" podID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerID="c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed" exitCode=0 Mar 12 09:42:09 crc kubenswrapper[4809]: I0312 09:42:09.672240 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqf4x" event={"ID":"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c","Type":"ContainerDied","Data":"c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed"} Mar 12 09:42:10 crc kubenswrapper[4809]: I0312 09:42:10.690935 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqf4x" event={"ID":"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c","Type":"ContainerStarted","Data":"15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7"} Mar 12 09:42:10 crc kubenswrapper[4809]: I0312 09:42:10.735628 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cqf4x" podStartSLOduration=4.196399918 podStartE2EDuration="7.735604675s" podCreationTimestamp="2026-03-12 09:42:03 +0000 UTC" firstStartedPulling="2026-03-12 09:42:06.571434728 +0000 UTC m=+6200.153470461" lastFinishedPulling="2026-03-12 
09:42:10.110639465 +0000 UTC m=+6203.692675218" observedRunningTime="2026-03-12 09:42:10.723481714 +0000 UTC m=+6204.305517457" watchObservedRunningTime="2026-03-12 09:42:10.735604675 +0000 UTC m=+6204.317640408" Mar 12 09:42:13 crc kubenswrapper[4809]: I0312 09:42:13.851126 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:13 crc kubenswrapper[4809]: I0312 09:42:13.851780 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:13 crc kubenswrapper[4809]: I0312 09:42:13.906027 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:15 crc kubenswrapper[4809]: I0312 09:42:15.048440 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:42:15 crc kubenswrapper[4809]: I0312 09:42:15.048831 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:42:23 crc kubenswrapper[4809]: I0312 09:42:23.937002 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:24 crc kubenswrapper[4809]: I0312 09:42:24.010334 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqf4x"] Mar 12 09:42:24 crc kubenswrapper[4809]: I0312 09:42:24.867150 4809 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-marketplace-cqf4x" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerName="registry-server" containerID="cri-o://15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7" gracePeriod=2 Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.521954 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.623091 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-utilities\") pod \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.623708 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-catalog-content\") pod \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.624292 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcrhz\" (UniqueName: \"kubernetes.io/projected/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-kube-api-access-vcrhz\") pod \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\" (UID: \"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c\") " Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.625929 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-utilities" (OuterVolumeSpecName: "utilities") pod "e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" (UID: "e9eb0451-aecf-42c8-a97f-a8f1393d6e1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.631357 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-kube-api-access-vcrhz" (OuterVolumeSpecName: "kube-api-access-vcrhz") pod "e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" (UID: "e9eb0451-aecf-42c8-a97f-a8f1393d6e1c"). InnerVolumeSpecName "kube-api-access-vcrhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.656231 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" (UID: "e9eb0451-aecf-42c8-a97f-a8f1393d6e1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.728360 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcrhz\" (UniqueName: \"kubernetes.io/projected/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-kube-api-access-vcrhz\") on node \"crc\" DevicePath \"\"" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.728809 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.728947 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.882845 4809 generic.go:334] "Generic (PLEG): container finished" podID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" 
containerID="15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7" exitCode=0 Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.882961 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqf4x" event={"ID":"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c","Type":"ContainerDied","Data":"15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7"} Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.883643 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqf4x" event={"ID":"e9eb0451-aecf-42c8-a97f-a8f1393d6e1c","Type":"ContainerDied","Data":"fa8dd4a0697dd4bb72d37f6ca746da71a0d87fddfdc978c1b53d637d0ffea09f"} Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.883704 4809 scope.go:117] "RemoveContainer" containerID="15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.882980 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqf4x" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.917364 4809 scope.go:117] "RemoveContainer" containerID="c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.937565 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqf4x"] Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.951036 4809 scope.go:117] "RemoveContainer" containerID="f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e" Mar 12 09:42:25 crc kubenswrapper[4809]: I0312 09:42:25.955777 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqf4x"] Mar 12 09:42:26 crc kubenswrapper[4809]: I0312 09:42:26.061858 4809 scope.go:117] "RemoveContainer" containerID="15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7" Mar 12 09:42:26 crc kubenswrapper[4809]: E0312 09:42:26.064776 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7\": container with ID starting with 15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7 not found: ID does not exist" containerID="15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7" Mar 12 09:42:26 crc kubenswrapper[4809]: I0312 09:42:26.064935 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7"} err="failed to get container status \"15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7\": rpc error: code = NotFound desc = could not find container \"15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7\": container with ID starting with 15d94c07105b9fa25b3e2856aca3e6f5463ca14a2284ec5a53e0fe4f3e683cf7 not found: 
ID does not exist" Mar 12 09:42:26 crc kubenswrapper[4809]: I0312 09:42:26.065087 4809 scope.go:117] "RemoveContainer" containerID="c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed" Mar 12 09:42:26 crc kubenswrapper[4809]: E0312 09:42:26.065811 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed\": container with ID starting with c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed not found: ID does not exist" containerID="c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed" Mar 12 09:42:26 crc kubenswrapper[4809]: I0312 09:42:26.065865 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed"} err="failed to get container status \"c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed\": rpc error: code = NotFound desc = could not find container \"c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed\": container with ID starting with c07925f6afc96b9f701cfb43de37d1e008c17b157b0baa47fa61a7da4d7fd5ed not found: ID does not exist" Mar 12 09:42:26 crc kubenswrapper[4809]: I0312 09:42:26.065902 4809 scope.go:117] "RemoveContainer" containerID="f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e" Mar 12 09:42:26 crc kubenswrapper[4809]: E0312 09:42:26.066756 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e\": container with ID starting with f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e not found: ID does not exist" containerID="f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e" Mar 12 09:42:26 crc kubenswrapper[4809]: I0312 09:42:26.066807 4809 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e"} err="failed to get container status \"f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e\": rpc error: code = NotFound desc = could not find container \"f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e\": container with ID starting with f880fe3a41b94d7588c74e5a9f98a59eb65de0a46e4434d7f32ffab311da1f5e not found: ID does not exist" Mar 12 09:42:27 crc kubenswrapper[4809]: I0312 09:42:27.126665 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" path="/var/lib/kubelet/pods/e9eb0451-aecf-42c8-a97f-a8f1393d6e1c/volumes" Mar 12 09:42:39 crc kubenswrapper[4809]: I0312 09:42:39.744985 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b43dd5f7-6f15-464c-8fea-98a37a6942d1/aodh-api/0.log" Mar 12 09:42:39 crc kubenswrapper[4809]: I0312 09:42:39.987561 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b43dd5f7-6f15-464c-8fea-98a37a6942d1/aodh-listener/0.log" Mar 12 09:42:39 crc kubenswrapper[4809]: I0312 09:42:39.994708 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b43dd5f7-6f15-464c-8fea-98a37a6942d1/aodh-notifier/0.log" Mar 12 09:42:40 crc kubenswrapper[4809]: I0312 09:42:40.020263 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b43dd5f7-6f15-464c-8fea-98a37a6942d1/aodh-evaluator/0.log" Mar 12 09:42:40 crc kubenswrapper[4809]: I0312 09:42:40.212001 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-787b8bc5d6-ldxk6_2396067f-cc69-4c96-8acd-a74b7667ebf3/barbican-api/0.log" Mar 12 09:42:40 crc kubenswrapper[4809]: I0312 09:42:40.302154 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-6f8c457c74-ppnrt_edad7564-3eab-49a8-a90d-7f945bf1a458/barbican-keystone-listener/0.log" Mar 12 09:42:40 crc kubenswrapper[4809]: I0312 09:42:40.309581 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-787b8bc5d6-ldxk6_2396067f-cc69-4c96-8acd-a74b7667ebf3/barbican-api-log/0.log" Mar 12 09:42:40 crc kubenswrapper[4809]: I0312 09:42:40.559271 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6f8c457c74-ppnrt_edad7564-3eab-49a8-a90d-7f945bf1a458/barbican-keystone-listener-log/0.log" Mar 12 09:42:40 crc kubenswrapper[4809]: I0312 09:42:40.563737 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68668c5975-8z984_afaa98ca-8124-4f9a-b990-58012066b090/barbican-worker-log/0.log" Mar 12 09:42:40 crc kubenswrapper[4809]: I0312 09:42:40.633605 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68668c5975-8z984_afaa98ca-8124-4f9a-b990-58012066b090/barbican-worker/0.log" Mar 12 09:42:40 crc kubenswrapper[4809]: I0312 09:42:40.951640 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-zb4bp_23801b93-a5b2-44dc-b04a-bc3c50fccbfd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.239399 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c30cf37a-f8d3-4e4c-84b1-d348eccf1b02/ceilometer-central-agent/0.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.303958 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c30cf37a-f8d3-4e4c-84b1-d348eccf1b02/proxy-httpd/0.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.309230 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_c30cf37a-f8d3-4e4c-84b1-d348eccf1b02/ceilometer-notification-agent/0.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.430424 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c30cf37a-f8d3-4e4c-84b1-d348eccf1b02/sg-core/0.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.590695 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ab7a99ec-57ee-48dd-948b-6e9309a0aa10/cinder-api-log/0.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.658594 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ab7a99ec-57ee-48dd-948b-6e9309a0aa10/cinder-api/0.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.859131 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6f39b431-0c84-4f84-b887-d5f74af3d573/cinder-scheduler/1.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.916894 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6f39b431-0c84-4f84-b887-d5f74af3d573/cinder-scheduler/0.log" Mar 12 09:42:41 crc kubenswrapper[4809]: I0312 09:42:41.948826 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6f39b431-0c84-4f84-b887-d5f74af3d573/probe/0.log" Mar 12 09:42:42 crc kubenswrapper[4809]: I0312 09:42:42.123955 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-cl2gg_888c8215-5ca1-481a-88cb-b01e21be6eff/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:42 crc kubenswrapper[4809]: I0312 09:42:42.205266 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mb7f2_f49ea7c3-c615-485c-8780-984d8c590f22/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:42 crc kubenswrapper[4809]: I0312 
09:42:42.396248 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-d9spn_12c88305-18da-470c-8cda-9a3844ca3e56/init/0.log" Mar 12 09:42:42 crc kubenswrapper[4809]: I0312 09:42:42.645132 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-d9spn_12c88305-18da-470c-8cda-9a3844ca3e56/init/0.log" Mar 12 09:42:42 crc kubenswrapper[4809]: I0312 09:42:42.746170 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-d9spn_12c88305-18da-470c-8cda-9a3844ca3e56/dnsmasq-dns/0.log" Mar 12 09:42:42 crc kubenswrapper[4809]: I0312 09:42:42.823571 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kcsgx_1f2c1c6e-1410-4f2e-9cb9-8f55ccf83f59/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:42 crc kubenswrapper[4809]: I0312 09:42:42.957060 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1e2012b7-5391-4ad2-a9aa-0ffb55502fe7/glance-httpd/0.log" Mar 12 09:42:42 crc kubenswrapper[4809]: I0312 09:42:42.977387 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1e2012b7-5391-4ad2-a9aa-0ffb55502fe7/glance-log/0.log" Mar 12 09:42:43 crc kubenswrapper[4809]: I0312 09:42:43.232036 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d1e77680-3cef-42db-8595-1aa372cf995b/glance-log/0.log" Mar 12 09:42:43 crc kubenswrapper[4809]: I0312 09:42:43.232639 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d1e77680-3cef-42db-8595-1aa372cf995b/glance-httpd/0.log" Mar 12 09:42:44 crc kubenswrapper[4809]: I0312 09:42:44.169985 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-engine-7ccd669dc7-8d45d_420beb8d-02d6-4a02-9b98-d30c28771f03/heat-engine/0.log" Mar 12 09:42:44 crc kubenswrapper[4809]: I0312 09:42:44.282893 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l2z9q_4f1326dd-cb21-41ec-9927-70a2f27b2020/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:44 crc kubenswrapper[4809]: I0312 09:42:44.465961 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-wc48v_7291754c-2659-44a6-b305-527b19034672/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:44 crc kubenswrapper[4809]: I0312 09:42:44.499831 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-8c9fdf79f-l9g9l_8934ae77-4826-4fa1-a5e1-578b06fa6650/heat-api/0.log" Mar 12 09:42:44 crc kubenswrapper[4809]: I0312 09:42:44.539420 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6dd54d47c7-gfgfm_d99e4f98-531f-4ef3-a833-82591d23bea7/heat-cfnapi/0.log" Mar 12 09:42:44 crc kubenswrapper[4809]: I0312 09:42:44.767763 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555101-9ndpl_51b417a3-2df1-4c98-8e49-b33a834c3e3e/keystone-cron/0.log" Mar 12 09:42:44 crc kubenswrapper[4809]: I0312 09:42:44.963169 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b92111bc-ddbe-401a-83c3-2b0c1e805c6a/kube-state-metrics/1.log" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.048603 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.048713 4809 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.048797 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.055741 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e6ab1f1fc625e84ede9f32b1a2be463f7b37e1535ff9f1acd9bbcf4150c9d1a"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.055844 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://2e6ab1f1fc625e84ede9f32b1a2be463f7b37e1535ff9f1acd9bbcf4150c9d1a" gracePeriod=600 Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.073181 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b92111bc-ddbe-401a-83c3-2b0c1e805c6a/kube-state-metrics/0.log" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.187409 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vb7zc_284de205-702a-4c6f-9623-d11a516113ca/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.380176 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-bmpc2_91efdd7b-a690-4c5c-8749-2d4589830be9/logging-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.531791 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-znnmb"] Mar 12 09:42:45 crc kubenswrapper[4809]: E0312 09:42:45.532568 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5935bc45-f371-4dd3-b7ce-65955339ea44" containerName="oc" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.532593 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="5935bc45-f371-4dd3-b7ce-65955339ea44" containerName="oc" Mar 12 09:42:45 crc kubenswrapper[4809]: E0312 09:42:45.532617 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerName="registry-server" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.532625 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerName="registry-server" Mar 12 09:42:45 crc kubenswrapper[4809]: E0312 09:42:45.532638 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e961be0-97b0-45fe-84db-61b86de4b9fa" containerName="container-00" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.532646 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e961be0-97b0-45fe-84db-61b86de4b9fa" containerName="container-00" Mar 12 09:42:45 crc kubenswrapper[4809]: E0312 09:42:45.532674 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerName="extract-utilities" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.532681 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerName="extract-utilities" Mar 12 09:42:45 crc kubenswrapper[4809]: E0312 09:42:45.532716 4809 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerName="extract-content" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.532725 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerName="extract-content" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.532999 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="5935bc45-f371-4dd3-b7ce-65955339ea44" containerName="oc" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.533029 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9eb0451-aecf-42c8-a97f-a8f1393d6e1c" containerName="registry-server" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.533040 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e961be0-97b0-45fe-84db-61b86de4b9fa" containerName="container-00" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.543423 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.557538 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znnmb"] Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.576423 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-catalog-content\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.576551 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-utilities\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.576716 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-kube-api-access-8ppjz\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.681897 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-utilities\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.682533 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-kube-api-access-8ppjz\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.683066 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-catalog-content\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.685649 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-utilities\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.686359 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-catalog-content\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.740715 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-kube-api-access-8ppjz\") pod \"redhat-operators-znnmb\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.881275 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:42:45 crc kubenswrapper[4809]: I0312 09:42:45.889657 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_b70e91cb-e032-4dcf-9e4a-4f82241f7398/mysqld-exporter/0.log" Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.222490 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="2e6ab1f1fc625e84ede9f32b1a2be463f7b37e1535ff9f1acd9bbcf4150c9d1a" exitCode=0 Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.222676 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"2e6ab1f1fc625e84ede9f32b1a2be463f7b37e1535ff9f1acd9bbcf4150c9d1a"} Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.223246 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3"} Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.223286 4809 scope.go:117] "RemoveContainer" containerID="50defde8943bd5e6d31a0926c787e30d8e3deae22d5d0fa49bb0620ba91106ab" Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.511721 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-677488855f-bz28w_5c65e075-ccaa-4054-9903-ebcd26368c00/neutron-httpd/0.log" Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.614835 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-tgn6n_3ea89d78-22ee-4c08-9c21-f093139ca0ac/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.647945 4809 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_neutron-677488855f-bz28w_5c65e075-ccaa-4054-9903-ebcd26368c00/neutron-api/0.log" Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.719718 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znnmb"] Mar 12 09:42:46 crc kubenswrapper[4809]: I0312 09:42:46.875902 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-76f57f5c5-gb5xf_d470174b-7e71-48e0-936a-b527f398db7e/keystone-api/0.log" Mar 12 09:42:47 crc kubenswrapper[4809]: I0312 09:42:47.252607 4809 generic.go:334] "Generic (PLEG): container finished" podID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerID="13fe74a8eeaee95d99fdae8b7473c56f310fb6265bd5c6f33d476de0fddece58" exitCode=0 Mar 12 09:42:47 crc kubenswrapper[4809]: I0312 09:42:47.253098 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znnmb" event={"ID":"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe","Type":"ContainerDied","Data":"13fe74a8eeaee95d99fdae8b7473c56f310fb6265bd5c6f33d476de0fddece58"} Mar 12 09:42:47 crc kubenswrapper[4809]: I0312 09:42:47.253141 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znnmb" event={"ID":"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe","Type":"ContainerStarted","Data":"09fe826e740b9646c3072843e3f4cc03e530db6cc923a6a3d1d613218a23670a"} Mar 12 09:42:47 crc kubenswrapper[4809]: I0312 09:42:47.260063 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 09:42:47 crc kubenswrapper[4809]: I0312 09:42:47.489760 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5eb0f97f-8243-4466-90df-a657b0bd5ceb/nova-cell0-conductor-conductor/0.log" Mar 12 09:42:47 crc kubenswrapper[4809]: I0312 09:42:47.965326 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_ec6fbec3-cafa-494f-a79b-9fcdb1665bb6/nova-api-log/0.log" Mar 12 09:42:47 crc kubenswrapper[4809]: I0312 09:42:47.977482 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_58eaf15a-e31e-4457-8f23-a3b58f5bd943/nova-cell1-conductor-conductor/0.log" Mar 12 09:42:48 crc kubenswrapper[4809]: I0312 09:42:48.329531 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b77d704e-5a2f-48ba-ac3c-c8495bda44ff/nova-cell1-novncproxy-novncproxy/0.log" Mar 12 09:42:48 crc kubenswrapper[4809]: I0312 09:42:48.500187 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ec6fbec3-cafa-494f-a79b-9fcdb1665bb6/nova-api-api/0.log" Mar 12 09:42:49 crc kubenswrapper[4809]: I0312 09:42:49.082561 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-w88ml_0fb437d4-d106-4655-8a3f-05446deb2be1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:49 crc kubenswrapper[4809]: I0312 09:42:49.167325 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9af05a91-f59f-4731-b75d-95fe7b869838/nova-metadata-log/0.log" Mar 12 09:42:49 crc kubenswrapper[4809]: I0312 09:42:49.304684 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znnmb" event={"ID":"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe","Type":"ContainerStarted","Data":"05d1fc190a2c0bc06d9a50a0c31ca008274906378c31f86b07cc31b097e3ab20"} Mar 12 09:42:49 crc kubenswrapper[4809]: I0312 09:42:49.684061 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_271f5266-7da5-4a37-a61c-aa02f9e04d15/nova-scheduler-scheduler/0.log" Mar 12 09:42:49 crc kubenswrapper[4809]: I0312 09:42:49.921268 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_729d6f4c-335b-486c-bea9-812d4abfdfd9/mysql-bootstrap/0.log" Mar 12 09:42:50 crc kubenswrapper[4809]: I0312 09:42:50.035474 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_729d6f4c-335b-486c-bea9-812d4abfdfd9/mysql-bootstrap/0.log" Mar 12 09:42:50 crc kubenswrapper[4809]: I0312 09:42:50.080252 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_729d6f4c-335b-486c-bea9-812d4abfdfd9/galera/0.log" Mar 12 09:42:50 crc kubenswrapper[4809]: I0312 09:42:50.434476 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d541616-6c38-428f-bd28-7dc54dceab8c/mysql-bootstrap/0.log" Mar 12 09:42:50 crc kubenswrapper[4809]: I0312 09:42:50.715711 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d541616-6c38-428f-bd28-7dc54dceab8c/mysql-bootstrap/0.log" Mar 12 09:42:50 crc kubenswrapper[4809]: I0312 09:42:50.750314 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d541616-6c38-428f-bd28-7dc54dceab8c/galera/1.log" Mar 12 09:42:50 crc kubenswrapper[4809]: I0312 09:42:50.765279 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3d541616-6c38-428f-bd28-7dc54dceab8c/galera/0.log" Mar 12 09:42:51 crc kubenswrapper[4809]: I0312 09:42:51.002863 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1af3f1ff-a2d9-4f4d-bd1d-f66ec1cd87bd/openstackclient/0.log" Mar 12 09:42:51 crc kubenswrapper[4809]: I0312 09:42:51.972507 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-chwkh_55e426a8-784b-4859-a48a-509e5f045c98/openstack-network-exporter/0.log" Mar 12 09:42:52 crc kubenswrapper[4809]: I0312 09:42:52.069967 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_9af05a91-f59f-4731-b75d-95fe7b869838/nova-metadata-metadata/0.log" Mar 12 09:42:52 crc kubenswrapper[4809]: I0312 09:42:52.266844 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pt4wh_cbad11f2-2bbf-45af-9f9f-72c409c5b0a6/ovsdb-server-init/0.log" Mar 12 09:42:52 crc kubenswrapper[4809]: I0312 09:42:52.810046 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pt4wh_cbad11f2-2bbf-45af-9f9f-72c409c5b0a6/ovs-vswitchd/0.log" Mar 12 09:42:52 crc kubenswrapper[4809]: I0312 09:42:52.828023 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pt4wh_cbad11f2-2bbf-45af-9f9f-72c409c5b0a6/ovsdb-server-init/0.log" Mar 12 09:42:52 crc kubenswrapper[4809]: I0312 09:42:52.828578 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pt4wh_cbad11f2-2bbf-45af-9f9f-72c409c5b0a6/ovsdb-server/0.log" Mar 12 09:42:53 crc kubenswrapper[4809]: I0312 09:42:53.172488 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x5bc6_e8f77780-9a39-4298-8bfe-76a54e1e41d9/ovn-controller/0.log" Mar 12 09:42:53 crc kubenswrapper[4809]: I0312 09:42:53.329379 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-45gch_918530d9-5478-4136-8ba8-a36197850b6e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:53 crc kubenswrapper[4809]: I0312 09:42:53.527890 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc/openstack-network-exporter/0.log" Mar 12 09:42:53 crc kubenswrapper[4809]: I0312 09:42:53.628709 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4df4ab5f-c9cf-40a8-9b50-44c5a67a97cc/ovn-northd/0.log" Mar 12 09:42:53 crc kubenswrapper[4809]: I0312 09:42:53.702462 4809 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_70486c1b-1f98-4340-a8f9-5a48e381ef7d/openstack-network-exporter/0.log" Mar 12 09:42:53 crc kubenswrapper[4809]: I0312 09:42:53.855676 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_70486c1b-1f98-4340-a8f9-5a48e381ef7d/ovsdbserver-nb/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.001519 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e7ddd2df-8231-4c1c-99a5-7af5758508ba/ovsdbserver-sb/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.043732 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e7ddd2df-8231-4c1c-99a5-7af5758508ba/openstack-network-exporter/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.433278 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f84ff9464-ktgn6_0102cd54-f02d-4f95-8152-6012f7397103/placement-log/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.511287 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f84ff9464-ktgn6_0102cd54-f02d-4f95-8152-6012f7397103/placement-api/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.545344 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5/init-config-reloader/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.861210 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5/config-reloader/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.868099 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5/prometheus/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.885476 4809 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5/init-config-reloader/0.log" Mar 12 09:42:54 crc kubenswrapper[4809]: I0312 09:42:54.925317 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9d591ff1-4f15-4ee2-90f0-9d3a7f1b90d5/thanos-sidecar/0.log" Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.176248 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b96c701c-45de-46e2-95d0-df4e12f6d643/setup-container/0.log" Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.388036 4809 generic.go:334] "Generic (PLEG): container finished" podID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerID="05d1fc190a2c0bc06d9a50a0c31ca008274906378c31f86b07cc31b097e3ab20" exitCode=0 Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.388086 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znnmb" event={"ID":"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe","Type":"ContainerDied","Data":"05d1fc190a2c0bc06d9a50a0c31ca008274906378c31f86b07cc31b097e3ab20"} Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.469401 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b96c701c-45de-46e2-95d0-df4e12f6d643/setup-container/0.log" Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.511787 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b96c701c-45de-46e2-95d0-df4e12f6d643/rabbitmq/0.log" Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.613197 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_396048fa-2424-4d6c-80b6-61e9cce8a4ec/setup-container/0.log" Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.802760 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_396048fa-2424-4d6c-80b6-61e9cce8a4ec/setup-container/0.log" Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.881401 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_66d312ab-6fb2-43de-98f2-dc692f592a47/setup-container/0.log" Mar 12 09:42:55 crc kubenswrapper[4809]: I0312 09:42:55.929783 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_396048fa-2424-4d6c-80b6-61e9cce8a4ec/rabbitmq/0.log" Mar 12 09:42:56 crc kubenswrapper[4809]: I0312 09:42:56.156632 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_66d312ab-6fb2-43de-98f2-dc692f592a47/setup-container/0.log" Mar 12 09:42:56 crc kubenswrapper[4809]: I0312 09:42:56.252095 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_66d312ab-6fb2-43de-98f2-dc692f592a47/rabbitmq/0.log" Mar 12 09:42:56 crc kubenswrapper[4809]: I0312 09:42:56.335227 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_221e3c09-0978-4460-9f66-642aa1165af4/setup-container/0.log" Mar 12 09:42:56 crc kubenswrapper[4809]: I0312 09:42:56.401947 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znnmb" event={"ID":"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe","Type":"ContainerStarted","Data":"c80d2bd375c73821277020b2a22bd5790f86c9016625fee8d475a2002354bae6"} Mar 12 09:42:56 crc kubenswrapper[4809]: I0312 09:42:56.505944 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-znnmb" podStartSLOduration=2.8428938710000002 podStartE2EDuration="11.505885279s" podCreationTimestamp="2026-03-12 09:42:45 +0000 UTC" firstStartedPulling="2026-03-12 09:42:47.255895477 +0000 UTC m=+6240.837931210" lastFinishedPulling="2026-03-12 09:42:55.918886885 +0000 UTC m=+6249.500922618" observedRunningTime="2026-03-12 
09:42:56.436512808 +0000 UTC m=+6250.018548541" watchObservedRunningTime="2026-03-12 09:42:56.505885279 +0000 UTC m=+6250.087921012" Mar 12 09:42:56 crc kubenswrapper[4809]: I0312 09:42:56.791268 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_221e3c09-0978-4460-9f66-642aa1165af4/setup-container/0.log" Mar 12 09:42:56 crc kubenswrapper[4809]: I0312 09:42:56.870950 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_221e3c09-0978-4460-9f66-642aa1165af4/rabbitmq/0.log" Mar 12 09:42:56 crc kubenswrapper[4809]: I0312 09:42:56.976127 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-dw8zf_6d715d91-c333-4191-afe9-f58e0350a408/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:57 crc kubenswrapper[4809]: I0312 09:42:57.123384 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-wbr6r_bc959ced-37e3-4644-a945-4d2803e3d453/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:57 crc kubenswrapper[4809]: I0312 09:42:57.441784 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9qpr5_3bd1743a-df42-404b-b846-0ca55bf273ef/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:57 crc kubenswrapper[4809]: I0312 09:42:57.444632 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-d7cg4_baef87a5-805e-4a55-855f-df82b7292028/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:58 crc kubenswrapper[4809]: I0312 09:42:58.006621 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-w6xqs_08e213da-a2a0-49b8-851c-c33ded78276a/ssh-known-hosts-edpm-deployment/0.log" Mar 12 09:42:58 crc kubenswrapper[4809]: I0312 09:42:58.452248 4809 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-w9c45_7c349c64-fcf0-48cf-91c5-fac0131bacc6/swift-ring-rebalance/0.log" Mar 12 09:42:58 crc kubenswrapper[4809]: I0312 09:42:58.472375 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-677f6cdf55-cmx2q_409cda21-626c-4670-9cf9-06900631ddd5/proxy-server/0.log" Mar 12 09:42:58 crc kubenswrapper[4809]: I0312 09:42:58.499446 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-677f6cdf55-cmx2q_409cda21-626c-4670-9cf9-06900631ddd5/proxy-httpd/0.log" Mar 12 09:42:58 crc kubenswrapper[4809]: I0312 09:42:58.656650 4809 scope.go:117] "RemoveContainer" containerID="cb4f6a25fd1cd3f3e5ec24e36ba0a6a6fb785921ac26955c3aa4350b4f2bdb38" Mar 12 09:42:58 crc kubenswrapper[4809]: I0312 09:42:58.729835 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/account-auditor/0.log" Mar 12 09:42:58 crc kubenswrapper[4809]: I0312 09:42:58.841952 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/account-reaper/0.log" Mar 12 09:42:58 crc kubenswrapper[4809]: I0312 09:42:58.874108 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/account-replicator/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.074495 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/account-server/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.097408 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_58606282-c6cc-482f-b1be-78717b5d38b2/memcached/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.145929 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/container-auditor/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.152276 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/container-server/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.155204 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/container-replicator/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.284809 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/container-updater/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.346571 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/object-auditor/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.415204 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/object-replicator/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.425206 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/object-expirer/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.452789 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/object-server/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.514986 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/object-updater/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.639710 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/swift-recon-cron/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.656162 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7b325264-3ac9-446e-b820-c40d942263e6/rsync/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.767944 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5b29d_590ac6ab-bccd-4261-bff2-c0027731a4af/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:42:59 crc kubenswrapper[4809]: I0312 09:42:59.980801 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-qtmnh_16172e01-c601-4b38-81d4-86a28061049a/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:43:00 crc kubenswrapper[4809]: I0312 09:43:00.103660 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_f431a085-378f-4a04-95ba-806aeee1f1dc/test-operator-logs-container/0.log" Mar 12 09:43:00 crc kubenswrapper[4809]: I0312 09:43:00.325167 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-trnst_833f444f-3131-4fe7-b59a-fd9c67224b6e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 09:43:00 crc kubenswrapper[4809]: I0312 09:43:00.730510 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d4b8d2be-fafc-4d7e-9348-053c53d3cb4d/tempest-tests-tempest-tests-runner/0.log" Mar 12 09:43:05 crc kubenswrapper[4809]: I0312 09:43:05.883311 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:43:05 crc kubenswrapper[4809]: I0312 09:43:05.883994 4809 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:43:06 crc kubenswrapper[4809]: I0312 09:43:06.939245 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-znnmb" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="registry-server" probeResult="failure" output=< Mar 12 09:43:06 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:43:06 crc kubenswrapper[4809]: > Mar 12 09:43:16 crc kubenswrapper[4809]: I0312 09:43:16.937203 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-znnmb" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="registry-server" probeResult="failure" output=< Mar 12 09:43:16 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:43:16 crc kubenswrapper[4809]: > Mar 12 09:43:26 crc kubenswrapper[4809]: I0312 09:43:26.930546 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-znnmb" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="registry-server" probeResult="failure" output=< Mar 12 09:43:26 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:43:26 crc kubenswrapper[4809]: > Mar 12 09:43:31 crc kubenswrapper[4809]: I0312 09:43:31.827021 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f_5973f973-351b-4a16-a21e-330a074ef1e3/util/0.log" Mar 12 09:43:32 crc kubenswrapper[4809]: I0312 09:43:32.115267 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f_5973f973-351b-4a16-a21e-330a074ef1e3/pull/0.log" Mar 12 09:43:32 crc kubenswrapper[4809]: I0312 09:43:32.127028 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f_5973f973-351b-4a16-a21e-330a074ef1e3/pull/0.log" Mar 12 09:43:32 crc kubenswrapper[4809]: I0312 09:43:32.166012 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f_5973f973-351b-4a16-a21e-330a074ef1e3/util/0.log" Mar 12 09:43:32 crc kubenswrapper[4809]: I0312 09:43:32.367864 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f_5973f973-351b-4a16-a21e-330a074ef1e3/util/0.log" Mar 12 09:43:32 crc kubenswrapper[4809]: I0312 09:43:32.420348 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f_5973f973-351b-4a16-a21e-330a074ef1e3/pull/0.log" Mar 12 09:43:32 crc kubenswrapper[4809]: I0312 09:43:32.457768 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeacfaf247b7c347bd3b189b13a1b1dbb195ad871df9bb2a8c421b70bpd64f_5973f973-351b-4a16-a21e-330a074ef1e3/extract/0.log" Mar 12 09:43:32 crc kubenswrapper[4809]: I0312 09:43:32.983195 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-vbjh6_275762b5-44af-4358-8562-9574a793b736/manager/0.log" Mar 12 09:43:33 crc kubenswrapper[4809]: I0312 09:43:33.376429 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-qtwhh_12b71885-6cb4-4888-9056-a39becec3670/manager/0.log" Mar 12 09:43:33 crc kubenswrapper[4809]: I0312 09:43:33.952308 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-7c2ts_b7c605d7-46e5-4daa-beb3-4ef624bc0df9/manager/0.log" Mar 12 09:43:34 crc kubenswrapper[4809]: I0312 
09:43:34.054453 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-cvm8g_da29e412-21cc-4249-9791-55335156ff1b/manager/0.log" Mar 12 09:43:35 crc kubenswrapper[4809]: I0312 09:43:35.403025 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-rkf7l_a4ff847c-f029-4537-ab92-0ae803769dfc/manager/0.log" Mar 12 09:43:35 crc kubenswrapper[4809]: I0312 09:43:35.522959 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-mz9kq_9da05ba1-fc66-48d8-a8ce-c99c04f0e416/manager/0.log" Mar 12 09:43:35 crc kubenswrapper[4809]: I0312 09:43:35.921086 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-7dq74_8684cb78-fad5-4998-a52f-ba39be875af1/manager/0.log" Mar 12 09:43:35 crc kubenswrapper[4809]: I0312 09:43:35.923796 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-kwt4n_ddab063f-ed2f-416c-8730-55de13229f58/manager/0.log" Mar 12 09:43:36 crc kubenswrapper[4809]: I0312 09:43:36.010173 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:43:36 crc kubenswrapper[4809]: I0312 09:43:36.084978 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:43:36 crc kubenswrapper[4809]: I0312 09:43:36.261285 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znnmb"] Mar 12 09:43:36 crc kubenswrapper[4809]: I0312 09:43:36.281765 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-dbw4q_a5138546-10af-4d98-96b5-b39dd71e9af1/manager/0.log" Mar 12 09:43:36 crc kubenswrapper[4809]: I0312 09:43:36.599284 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-mnhzr_87b1729d-5a9d-4e35-bec1-21d7307020f2/manager/0.log" Mar 12 09:43:36 crc kubenswrapper[4809]: I0312 09:43:36.710609 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-s5r4z_f3a998c8-9d1c-46ce-9bf6-adcbf704fb5c/manager/0.log" Mar 12 09:43:37 crc kubenswrapper[4809]: I0312 09:43:37.019537 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-569cc54c5-tlxd5_b9586c2f-2fbd-4ba8-a5a5-45ed306bc53e/manager/0.log" Mar 12 09:43:37 crc kubenswrapper[4809]: I0312 09:43:37.080805 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-nwrl4_e349e256-24bd-459e-b5d5-4bf9d85b2a5d/manager/0.log" Mar 12 09:43:37 crc kubenswrapper[4809]: I0312 09:43:37.339358 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7bdkr4_d42ca3a9-74a0-4e76-ac25-730f412c28de/manager/0.log" Mar 12 09:43:37 crc kubenswrapper[4809]: I0312 09:43:37.669676 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6bbc5b75f-zqc46_53dea28f-c986-4b4e-a4da-757b2bc9435e/operator/0.log" Mar 12 09:43:37 crc kubenswrapper[4809]: I0312 09:43:37.751925 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fxhzz_1e560783-0ec2-4688-a79e-59a1df5b2e61/registry-server/1.log" Mar 12 09:43:37 crc kubenswrapper[4809]: I0312 09:43:37.891159 4809 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-znnmb" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="registry-server" containerID="cri-o://c80d2bd375c73821277020b2a22bd5790f86c9016625fee8d475a2002354bae6" gracePeriod=2 Mar 12 09:43:38 crc kubenswrapper[4809]: I0312 09:43:38.306295 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fxhzz_1e560783-0ec2-4688-a79e-59a1df5b2e61/registry-server/0.log" Mar 12 09:43:38 crc kubenswrapper[4809]: I0312 09:43:38.472803 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-9544k_990522cb-5ef4-45d5-9eba-debcb4e51bae/manager/0.log" Mar 12 09:43:38 crc kubenswrapper[4809]: I0312 09:43:38.701557 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-nsxhb_3f72b8db-a17a-4b2c-b638-711766e4f6ed/manager/0.log" Mar 12 09:43:38 crc kubenswrapper[4809]: I0312 09:43:38.926105 4809 generic.go:334] "Generic (PLEG): container finished" podID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerID="c80d2bd375c73821277020b2a22bd5790f86c9016625fee8d475a2002354bae6" exitCode=0 Mar 12 09:43:38 crc kubenswrapper[4809]: I0312 09:43:38.926186 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znnmb" event={"ID":"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe","Type":"ContainerDied","Data":"c80d2bd375c73821277020b2a22bd5790f86c9016625fee8d475a2002354bae6"} Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.087805 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tc5f9_59492e92-3148-41d4-86ab-0a69de5a3518/operator/0.log" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.227507 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-p24mv_60e08cbe-2284-4030-8073-892fd74bcdc6/manager/0.log" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.536358 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.692833 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-utilities\") pod \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.693284 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-catalog-content\") pod \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.693323 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-kube-api-access-8ppjz\") pod \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\" (UID: \"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe\") " Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.698200 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-utilities" (OuterVolumeSpecName: "utilities") pod "1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" (UID: "1034d2f4-b4fe-4732-b91d-fe1433fb4fbe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.715317 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-rw46j_4abc098b-51aa-4483-93e1-4880178f6167/manager/0.log" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.717802 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-kube-api-access-8ppjz" (OuterVolumeSpecName: "kube-api-access-8ppjz") pod "1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" (UID: "1034d2f4-b4fe-4732-b91d-fe1433fb4fbe"). InnerVolumeSpecName "kube-api-access-8ppjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.802260 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.802293 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ppjz\" (UniqueName: \"kubernetes.io/projected/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-kube-api-access-8ppjz\") on node \"crc\" DevicePath \"\"" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.945395 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znnmb" event={"ID":"1034d2f4-b4fe-4732-b91d-fe1433fb4fbe","Type":"ContainerDied","Data":"09fe826e740b9646c3072843e3f4cc03e530db6cc923a6a3d1d613218a23670a"} Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.945452 4809 scope.go:117] "RemoveContainer" containerID="c80d2bd375c73821277020b2a22bd5790f86c9016625fee8d475a2002354bae6" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.945616 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-znnmb" Mar 12 09:43:39 crc kubenswrapper[4809]: I0312 09:43:39.997733 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" (UID: "1034d2f4-b4fe-4732-b91d-fe1433fb4fbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:43:40 crc kubenswrapper[4809]: I0312 09:43:40.007045 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:43:40 crc kubenswrapper[4809]: I0312 09:43:40.015772 4809 scope.go:117] "RemoveContainer" containerID="05d1fc190a2c0bc06d9a50a0c31ca008274906378c31f86b07cc31b097e3ab20" Mar 12 09:43:40 crc kubenswrapper[4809]: I0312 09:43:40.045940 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-svzr9_bd9084af-4a31-4802-b9b2-827b0ad53628/manager/0.log" Mar 12 09:43:40 crc kubenswrapper[4809]: I0312 09:43:40.066441 4809 scope.go:117] "RemoveContainer" containerID="13fe74a8eeaee95d99fdae8b7473c56f310fb6265bd5c6f33d476de0fddece58" Mar 12 09:43:40 crc kubenswrapper[4809]: I0312 09:43:40.479210 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znnmb"] Mar 12 09:43:40 crc kubenswrapper[4809]: I0312 09:43:40.493027 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-znnmb"] Mar 12 09:43:40 crc kubenswrapper[4809]: I0312 09:43:40.770628 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-57c6b5bd58-tt85t_b40480af-2b15-4c8f-9bf2-f63ca0dd6870/manager/0.log" Mar 
12 09:43:40 crc kubenswrapper[4809]: I0312 09:43:40.985763 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d5444f5b-xmqds_2ef4d6d0-1c93-4f10-bd15-5de5ede76c62/manager/0.log" Mar 12 09:43:41 crc kubenswrapper[4809]: I0312 09:43:41.135222 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" path="/var/lib/kubelet/pods/1034d2f4-b4fe-4732-b91d-fe1433fb4fbe/volumes" Mar 12 09:43:46 crc kubenswrapper[4809]: I0312 09:43:46.183300 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-gtzwg_ead62bdc-2a69-4b3a-a6c5-b60614a34263/manager/0.log" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.160228 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555144-fddmz"] Mar 12 09:44:00 crc kubenswrapper[4809]: E0312 09:44:00.163633 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="extract-utilities" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.163677 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="extract-utilities" Mar 12 09:44:00 crc kubenswrapper[4809]: E0312 09:44:00.163722 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="extract-content" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.163732 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="extract-content" Mar 12 09:44:00 crc kubenswrapper[4809]: E0312 09:44:00.163776 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="registry-server" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.163784 4809 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="registry-server" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.164096 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="1034d2f4-b4fe-4732-b91d-fe1433fb4fbe" containerName="registry-server" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.165641 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555144-fddmz" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.175251 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555144-fddmz"] Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.179895 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.179910 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.179893 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.247375 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqkvn\" (UniqueName: \"kubernetes.io/projected/97017784-7f50-41cc-82a7-d6aeb7265902-kube-api-access-dqkvn\") pod \"auto-csr-approver-29555144-fddmz\" (UID: \"97017784-7f50-41cc-82a7-d6aeb7265902\") " pod="openshift-infra/auto-csr-approver-29555144-fddmz" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.353359 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqkvn\" (UniqueName: \"kubernetes.io/projected/97017784-7f50-41cc-82a7-d6aeb7265902-kube-api-access-dqkvn\") pod \"auto-csr-approver-29555144-fddmz\" (UID: 
\"97017784-7f50-41cc-82a7-d6aeb7265902\") " pod="openshift-infra/auto-csr-approver-29555144-fddmz" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.382177 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkvn\" (UniqueName: \"kubernetes.io/projected/97017784-7f50-41cc-82a7-d6aeb7265902-kube-api-access-dqkvn\") pod \"auto-csr-approver-29555144-fddmz\" (UID: \"97017784-7f50-41cc-82a7-d6aeb7265902\") " pod="openshift-infra/auto-csr-approver-29555144-fddmz" Mar 12 09:44:00 crc kubenswrapper[4809]: I0312 09:44:00.506056 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555144-fddmz" Mar 12 09:44:01 crc kubenswrapper[4809]: I0312 09:44:01.946809 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555144-fddmz"] Mar 12 09:44:03 crc kubenswrapper[4809]: I0312 09:44:03.329037 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555144-fddmz" event={"ID":"97017784-7f50-41cc-82a7-d6aeb7265902","Type":"ContainerStarted","Data":"73732e118df3cc57b97c7a837c0cc401e816719032ec59ef1f737863312ecb16"} Mar 12 09:44:05 crc kubenswrapper[4809]: I0312 09:44:05.351575 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555144-fddmz" event={"ID":"97017784-7f50-41cc-82a7-d6aeb7265902","Type":"ContainerStarted","Data":"5e68cbd2883814ab74c042bc03851c28af6a20ba6df276e132c3c2a6c6e8b1d9"} Mar 12 09:44:05 crc kubenswrapper[4809]: I0312 09:44:05.373325 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555144-fddmz" podStartSLOduration=4.4212354640000004 podStartE2EDuration="5.373305822s" podCreationTimestamp="2026-03-12 09:44:00 +0000 UTC" firstStartedPulling="2026-03-12 09:44:02.415282202 +0000 UTC m=+6315.997317935" lastFinishedPulling="2026-03-12 09:44:03.36735256 +0000 UTC 
m=+6316.949388293" observedRunningTime="2026-03-12 09:44:05.363851515 +0000 UTC m=+6318.945887288" watchObservedRunningTime="2026-03-12 09:44:05.373305822 +0000 UTC m=+6318.955341555" Mar 12 09:44:06 crc kubenswrapper[4809]: I0312 09:44:06.363934 4809 generic.go:334] "Generic (PLEG): container finished" podID="97017784-7f50-41cc-82a7-d6aeb7265902" containerID="5e68cbd2883814ab74c042bc03851c28af6a20ba6df276e132c3c2a6c6e8b1d9" exitCode=0 Mar 12 09:44:06 crc kubenswrapper[4809]: I0312 09:44:06.364056 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555144-fddmz" event={"ID":"97017784-7f50-41cc-82a7-d6aeb7265902","Type":"ContainerDied","Data":"5e68cbd2883814ab74c042bc03851c28af6a20ba6df276e132c3c2a6c6e8b1d9"} Mar 12 09:44:07 crc kubenswrapper[4809]: I0312 09:44:07.818073 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555144-fddmz" Mar 12 09:44:07 crc kubenswrapper[4809]: I0312 09:44:07.972432 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqkvn\" (UniqueName: \"kubernetes.io/projected/97017784-7f50-41cc-82a7-d6aeb7265902-kube-api-access-dqkvn\") pod \"97017784-7f50-41cc-82a7-d6aeb7265902\" (UID: \"97017784-7f50-41cc-82a7-d6aeb7265902\") " Mar 12 09:44:07 crc kubenswrapper[4809]: I0312 09:44:07.980295 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97017784-7f50-41cc-82a7-d6aeb7265902-kube-api-access-dqkvn" (OuterVolumeSpecName: "kube-api-access-dqkvn") pod "97017784-7f50-41cc-82a7-d6aeb7265902" (UID: "97017784-7f50-41cc-82a7-d6aeb7265902"). InnerVolumeSpecName "kube-api-access-dqkvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:44:08 crc kubenswrapper[4809]: I0312 09:44:08.075661 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqkvn\" (UniqueName: \"kubernetes.io/projected/97017784-7f50-41cc-82a7-d6aeb7265902-kube-api-access-dqkvn\") on node \"crc\" DevicePath \"\"" Mar 12 09:44:08 crc kubenswrapper[4809]: I0312 09:44:08.386560 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555144-fddmz" event={"ID":"97017784-7f50-41cc-82a7-d6aeb7265902","Type":"ContainerDied","Data":"73732e118df3cc57b97c7a837c0cc401e816719032ec59ef1f737863312ecb16"} Mar 12 09:44:08 crc kubenswrapper[4809]: I0312 09:44:08.386603 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73732e118df3cc57b97c7a837c0cc401e816719032ec59ef1f737863312ecb16" Mar 12 09:44:08 crc kubenswrapper[4809]: I0312 09:44:08.386608 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555144-fddmz" Mar 12 09:44:08 crc kubenswrapper[4809]: I0312 09:44:08.524426 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555138-7lx2q"] Mar 12 09:44:08 crc kubenswrapper[4809]: I0312 09:44:08.536526 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555138-7lx2q"] Mar 12 09:44:08 crc kubenswrapper[4809]: I0312 09:44:08.745419 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nfbkw_54d01473-f99a-47d2-ae35-0a4b933b5098/control-plane-machine-set-operator/0.log" Mar 12 09:44:09 crc kubenswrapper[4809]: I0312 09:44:09.120178 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70576c65-c34c-4c1b-9c96-2623814c7eb9" path="/var/lib/kubelet/pods/70576c65-c34c-4c1b-9c96-2623814c7eb9/volumes" Mar 12 09:44:09 crc kubenswrapper[4809]: 
I0312 09:44:09.136358 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cqnvn_76b05f76-b086-4375-9ba4-b1d4f5624ba0/kube-rbac-proxy/0.log" Mar 12 09:44:09 crc kubenswrapper[4809]: I0312 09:44:09.153816 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cqnvn_76b05f76-b086-4375-9ba4-b1d4f5624ba0/machine-api-operator/0.log" Mar 12 09:44:27 crc kubenswrapper[4809]: I0312 09:44:27.098044 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-947gd_6bc14a5b-8cfa-4f00-917e-72248d3aadb5/cert-manager-controller/0.log" Mar 12 09:44:27 crc kubenswrapper[4809]: I0312 09:44:27.312161 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jfznn_a993902b-9d72-48a5-8acb-0dd1501e3445/cert-manager-cainjector/0.log" Mar 12 09:44:27 crc kubenswrapper[4809]: I0312 09:44:27.361092 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4jx86_5097b432-e4b9-407e-97a3-3821992f9f91/cert-manager-webhook/0.log" Mar 12 09:44:43 crc kubenswrapper[4809]: I0312 09:44:43.265542 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-72xtc_9a21a990-10ce-4677-95d2-df00083cbe34/nmstate-console-plugin/0.log" Mar 12 09:44:43 crc kubenswrapper[4809]: I0312 09:44:43.494583 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-bpg7k_f05544ad-32fb-491d-ae13-7849090c1f34/kube-rbac-proxy/0.log" Mar 12 09:44:43 crc kubenswrapper[4809]: I0312 09:44:43.498272 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jgxtq_f89f6199-4afe-4ace-a7a9-2b8c91451d40/nmstate-handler/0.log" Mar 12 09:44:43 crc kubenswrapper[4809]: I0312 09:44:43.722554 4809 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-bpg7k_f05544ad-32fb-491d-ae13-7849090c1f34/nmstate-metrics/0.log" Mar 12 09:44:43 crc kubenswrapper[4809]: I0312 09:44:43.794487 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-cjfhr_5fcddb8c-3912-454f-9b48-9137114837a7/nmstate-operator/0.log" Mar 12 09:44:43 crc kubenswrapper[4809]: I0312 09:44:43.941296 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-xxj6b_293a6f5b-33ba-4398-a1c7-a5f97db11950/nmstate-webhook/0.log" Mar 12 09:44:45 crc kubenswrapper[4809]: I0312 09:44:45.048708 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:44:45 crc kubenswrapper[4809]: I0312 09:44:45.049059 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:44:58 crc kubenswrapper[4809]: I0312 09:44:58.926940 4809 scope.go:117] "RemoveContainer" containerID="6b5d3c3b166a8e9e50603d0a94d1eb56a10390968c5fb9c70abb3e5f5d1d4e3d" Mar 12 09:44:59 crc kubenswrapper[4809]: I0312 09:44:59.673471 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65bb5b59df-8jk5h_861c2912-a932-4142-9b25-c7c0e1aaf062/kube-rbac-proxy/0.log" Mar 12 09:44:59 crc kubenswrapper[4809]: I0312 09:44:59.734229 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65bb5b59df-8jk5h_861c2912-a932-4142-9b25-c7c0e1aaf062/manager/0.log" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.162959 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld"] Mar 12 09:45:00 crc kubenswrapper[4809]: E0312 09:45:00.163664 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97017784-7f50-41cc-82a7-d6aeb7265902" containerName="oc" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.163680 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="97017784-7f50-41cc-82a7-d6aeb7265902" containerName="oc" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.163960 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="97017784-7f50-41cc-82a7-d6aeb7265902" containerName="oc" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.165504 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.172613 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.172692 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.177087 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld"] Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.203896 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jtg2\" (UniqueName: \"kubernetes.io/projected/8fe32095-6b2e-4019-a8c4-50bec0f786b9-kube-api-access-4jtg2\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.203978 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fe32095-6b2e-4019-a8c4-50bec0f786b9-secret-volume\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.204300 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fe32095-6b2e-4019-a8c4-50bec0f786b9-config-volume\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.306361 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jtg2\" (UniqueName: \"kubernetes.io/projected/8fe32095-6b2e-4019-a8c4-50bec0f786b9-kube-api-access-4jtg2\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.306423 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fe32095-6b2e-4019-a8c4-50bec0f786b9-secret-volume\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.306630 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fe32095-6b2e-4019-a8c4-50bec0f786b9-config-volume\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.307927 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fe32095-6b2e-4019-a8c4-50bec0f786b9-config-volume\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.316084 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8fe32095-6b2e-4019-a8c4-50bec0f786b9-secret-volume\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.326728 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jtg2\" (UniqueName: \"kubernetes.io/projected/8fe32095-6b2e-4019-a8c4-50bec0f786b9-kube-api-access-4jtg2\") pod \"collect-profiles-29555145-rn6ld\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:00 crc kubenswrapper[4809]: I0312 09:45:00.486557 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:01 crc kubenswrapper[4809]: I0312 09:45:01.122039 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld"] Mar 12 09:45:02 crc kubenswrapper[4809]: I0312 09:45:02.017514 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" event={"ID":"8fe32095-6b2e-4019-a8c4-50bec0f786b9","Type":"ContainerStarted","Data":"9ddf32e07410285b539ecb2a03126cc8b2422ce079da076a080a34125c655af1"} Mar 12 09:45:02 crc kubenswrapper[4809]: I0312 09:45:02.017981 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" event={"ID":"8fe32095-6b2e-4019-a8c4-50bec0f786b9","Type":"ContainerStarted","Data":"5892a331c2421fd6a4b0bcd88b0545e5832a5d8139f3da8f0214c268e01276e1"} Mar 12 09:45:02 crc kubenswrapper[4809]: I0312 09:45:02.045987 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" 
podStartSLOduration=2.04596454 podStartE2EDuration="2.04596454s" podCreationTimestamp="2026-03-12 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 09:45:02.034407283 +0000 UTC m=+6375.616443016" watchObservedRunningTime="2026-03-12 09:45:02.04596454 +0000 UTC m=+6375.628000273" Mar 12 09:45:03 crc kubenswrapper[4809]: I0312 09:45:03.029546 4809 generic.go:334] "Generic (PLEG): container finished" podID="8fe32095-6b2e-4019-a8c4-50bec0f786b9" containerID="9ddf32e07410285b539ecb2a03126cc8b2422ce079da076a080a34125c655af1" exitCode=0 Mar 12 09:45:03 crc kubenswrapper[4809]: I0312 09:45:03.029594 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" event={"ID":"8fe32095-6b2e-4019-a8c4-50bec0f786b9","Type":"ContainerDied","Data":"9ddf32e07410285b539ecb2a03126cc8b2422ce079da076a080a34125c655af1"} Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.603996 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.652684 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jtg2\" (UniqueName: \"kubernetes.io/projected/8fe32095-6b2e-4019-a8c4-50bec0f786b9-kube-api-access-4jtg2\") pod \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.652853 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fe32095-6b2e-4019-a8c4-50bec0f786b9-config-volume\") pod \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.652950 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fe32095-6b2e-4019-a8c4-50bec0f786b9-secret-volume\") pod \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\" (UID: \"8fe32095-6b2e-4019-a8c4-50bec0f786b9\") " Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.654951 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fe32095-6b2e-4019-a8c4-50bec0f786b9-config-volume" (OuterVolumeSpecName: "config-volume") pod "8fe32095-6b2e-4019-a8c4-50bec0f786b9" (UID: "8fe32095-6b2e-4019-a8c4-50bec0f786b9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.661337 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe32095-6b2e-4019-a8c4-50bec0f786b9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8fe32095-6b2e-4019-a8c4-50bec0f786b9" (UID: "8fe32095-6b2e-4019-a8c4-50bec0f786b9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.662023 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe32095-6b2e-4019-a8c4-50bec0f786b9-kube-api-access-4jtg2" (OuterVolumeSpecName: "kube-api-access-4jtg2") pod "8fe32095-6b2e-4019-a8c4-50bec0f786b9" (UID: "8fe32095-6b2e-4019-a8c4-50bec0f786b9"). InnerVolumeSpecName "kube-api-access-4jtg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.755772 4809 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fe32095-6b2e-4019-a8c4-50bec0f786b9-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.755808 4809 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8fe32095-6b2e-4019-a8c4-50bec0f786b9-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 09:45:04 crc kubenswrapper[4809]: I0312 09:45:04.755819 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jtg2\" (UniqueName: \"kubernetes.io/projected/8fe32095-6b2e-4019-a8c4-50bec0f786b9-kube-api-access-4jtg2\") on node \"crc\" DevicePath \"\"" Mar 12 09:45:05 crc kubenswrapper[4809]: I0312 09:45:05.055175 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" event={"ID":"8fe32095-6b2e-4019-a8c4-50bec0f786b9","Type":"ContainerDied","Data":"5892a331c2421fd6a4b0bcd88b0545e5832a5d8139f3da8f0214c268e01276e1"} Mar 12 09:45:05 crc kubenswrapper[4809]: I0312 09:45:05.055217 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5892a331c2421fd6a4b0bcd88b0545e5832a5d8139f3da8f0214c268e01276e1" Mar 12 09:45:05 crc kubenswrapper[4809]: I0312 09:45:05.055287 4809 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555145-rn6ld" Mar 12 09:45:05 crc kubenswrapper[4809]: I0312 09:45:05.137430 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph"] Mar 12 09:45:05 crc kubenswrapper[4809]: I0312 09:45:05.153236 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555100-xtvph"] Mar 12 09:45:07 crc kubenswrapper[4809]: I0312 09:45:07.120484 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7ea56f1-9c01-47d5-b5b2-d778e7dc339a" path="/var/lib/kubelet/pods/b7ea56f1-9c01-47d5-b5b2-d778e7dc339a/volumes" Mar 12 09:45:15 crc kubenswrapper[4809]: I0312 09:45:15.048968 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:45:15 crc kubenswrapper[4809]: I0312 09:45:15.049730 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:45:16 crc kubenswrapper[4809]: I0312 09:45:16.933464 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fzj9d_a6312c6e-68f1-40c5-82ea-50fda65c492f/prometheus-operator/0.log" Mar 12 09:45:17 crc kubenswrapper[4809]: I0312 09:45:17.254699 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_d4977a4c-9481-45c8-ba76-6e985c4e11be/prometheus-operator-admission-webhook/0.log" Mar 12 09:45:17 crc kubenswrapper[4809]: I0312 09:45:17.399052 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_ef4ced13-c901-407a-aa7b-ed5198a4cca8/prometheus-operator-admission-webhook/0.log" Mar 12 09:45:17 crc kubenswrapper[4809]: I0312 09:45:17.506485 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-wwrvc_8f1a2dab-e883-409f-ba21-a52ea0947c1b/operator/0.log" Mar 12 09:45:17 crc kubenswrapper[4809]: I0312 09:45:17.641875 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-p9md4_a3c7bda2-fd1b-4e75-8991-a7f713283b7d/observability-ui-dashboards/0.log" Mar 12 09:45:17 crc kubenswrapper[4809]: I0312 09:45:17.759796 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gb2mk_68cdd0d8-8927-4777-8067-995b7a404794/perses-operator/0.log" Mar 12 09:45:35 crc kubenswrapper[4809]: I0312 09:45:35.120327 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-z59kc_594fada9-d745-48a4-888c-a162cae5bf71/cluster-logging-operator/0.log" Mar 12 09:45:35 crc kubenswrapper[4809]: I0312 09:45:35.353815 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-n9p24_20298793-4684-4286-a96e-36aea4b5be08/collector/0.log" Mar 12 09:45:35 crc kubenswrapper[4809]: I0312 09:45:35.363691 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_ba5833e7-becf-412f-879b-6cab8777fb0b/loki-compactor/0.log" Mar 12 09:45:35 crc kubenswrapper[4809]: I0312 09:45:35.591860 4809 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-fd898bfdd-dhc65_8232d992-4bfb-46ca-a440-647d8c006309/gateway/0.log" Mar 12 09:45:35 crc kubenswrapper[4809]: I0312 09:45:35.666002 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-fd898bfdd-dhc65_8232d992-4bfb-46ca-a440-647d8c006309/opa/0.log" Mar 12 09:45:35 crc kubenswrapper[4809]: I0312 09:45:35.674826 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-pbfks_8ac4723a-9ff0-4186-8177-8a86f6db8b9f/loki-distributor/0.log" Mar 12 09:45:35 crc kubenswrapper[4809]: I0312 09:45:35.885798 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-fd898bfdd-tbqw2_7546fb46-f601-417f-ad26-69a4fb625fdc/opa/0.log" Mar 12 09:45:35 crc kubenswrapper[4809]: I0312 09:45:35.915013 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-fd898bfdd-tbqw2_7546fb46-f601-417f-ad26-69a4fb625fdc/gateway/0.log" Mar 12 09:45:36 crc kubenswrapper[4809]: I0312 09:45:36.115387 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_bcc0a610-6eb0-4a5f-88d9-5d069f760c14/loki-index-gateway/0.log" Mar 12 09:45:36 crc kubenswrapper[4809]: I0312 09:45:36.209644 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_1ef2f625-286b-49c8-97d9-a98350cfea7b/loki-ingester/0.log" Mar 12 09:45:36 crc kubenswrapper[4809]: I0312 09:45:36.405802 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-zb8df_c3630a5f-f4c4-42af-8335-60dbcbdb4961/loki-querier/0.log" Mar 12 09:45:36 crc kubenswrapper[4809]: I0312 09:45:36.687677 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-6pp5s_9fc47673-0fe3-49f6-a2bb-06845a2f3fc4/loki-query-frontend/0.log" Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.049029 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.049869 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.049942 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.050986 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.051049 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" gracePeriod=600 Mar 12 09:45:45 crc kubenswrapper[4809]: E0312 09:45:45.176363 4809 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.529516 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" exitCode=0 Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.529578 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3"} Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.529642 4809 scope.go:117] "RemoveContainer" containerID="2e6ab1f1fc625e84ede9f32b1a2be463f7b37e1535ff9f1acd9bbcf4150c9d1a" Mar 12 09:45:45 crc kubenswrapper[4809]: I0312 09:45:45.530504 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:45:45 crc kubenswrapper[4809]: E0312 09:45:45.530904 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.020259 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-7bb4cc7c98-7jdts_b7ddc716-c09f-4923-8e70-f2251873aea9/controller/1.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.138595 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-7jdts_b7ddc716-c09f-4923-8e70-f2251873aea9/controller/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.211859 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-7jdts_b7ddc716-c09f-4923-8e70-f2251873aea9/kube-rbac-proxy/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.388363 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-frr-files/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.574976 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-reloader/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.588101 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-frr-files/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.588542 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-metrics/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.641610 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-reloader/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.823680 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-reloader/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.848482 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-metrics/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.857087 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-metrics/0.log" Mar 12 09:45:54 crc kubenswrapper[4809]: I0312 09:45:54.859033 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-frr-files/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.060164 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-reloader/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.065702 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-frr-files/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.069720 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/cp-metrics/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.101430 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/controller/1.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.266150 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/controller/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.327142 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/frr-metrics/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.528254 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/kube-rbac-proxy/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.582888 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/kube-rbac-proxy-frr/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.834602 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/reloader/0.log" Mar 12 09:45:55 crc kubenswrapper[4809]: I0312 09:45:55.899159 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/frr/1.log" Mar 12 09:45:56 crc kubenswrapper[4809]: I0312 09:45:56.093381 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-rmrt9_6b5e61e4-2d13-491e-be53-aed7ae027cb1/frr-k8s-webhook-server/1.log" Mar 12 09:45:56 crc kubenswrapper[4809]: I0312 09:45:56.107667 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:45:56 crc kubenswrapper[4809]: E0312 09:45:56.108158 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:45:56 crc kubenswrapper[4809]: I0312 09:45:56.124106 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-rmrt9_6b5e61e4-2d13-491e-be53-aed7ae027cb1/frr-k8s-webhook-server/0.log" Mar 12 09:45:56 crc kubenswrapper[4809]: I0312 09:45:56.398454 4809 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59fcfb5dc8-gpg2j_6befee19-0c78-47ca-a608-be246c0d7bb5/webhook-server/1.log" Mar 12 09:45:56 crc kubenswrapper[4809]: I0312 09:45:56.420794 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-648f7d48f7-wdwdw_d177b9be-4037-4f81-8227-9c4361eba85f/manager/0.log" Mar 12 09:45:56 crc kubenswrapper[4809]: I0312 09:45:56.731066 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59fcfb5dc8-gpg2j_6befee19-0c78-47ca-a608-be246c0d7bb5/webhook-server/0.log" Mar 12 09:45:56 crc kubenswrapper[4809]: I0312 09:45:56.736583 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jt7s5_d64bdb22-6590-41af-94ad-0e725ca0355a/kube-rbac-proxy/0.log" Mar 12 09:45:57 crc kubenswrapper[4809]: I0312 09:45:57.348287 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jt7s5_d64bdb22-6590-41af-94ad-0e725ca0355a/speaker/1.log" Mar 12 09:45:57 crc kubenswrapper[4809]: I0312 09:45:57.777358 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jt7s5_d64bdb22-6590-41af-94ad-0e725ca0355a/speaker/0.log" Mar 12 09:45:57 crc kubenswrapper[4809]: I0312 09:45:57.805654 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n22vs_773274bc-3d57-4d1c-aaf9-f81ce1b981c4/frr/0.log" Mar 12 09:45:59 crc kubenswrapper[4809]: I0312 09:45:59.059824 4809 scope.go:117] "RemoveContainer" containerID="143305fe37fb0b9dd67eb703f68f087a4205eb79e78b70eb3d7ef557c0b89947" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.159016 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555146-b9bqf"] Mar 12 09:46:00 crc kubenswrapper[4809]: E0312 09:46:00.160225 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8fe32095-6b2e-4019-a8c4-50bec0f786b9" containerName="collect-profiles" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.160259 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe32095-6b2e-4019-a8c4-50bec0f786b9" containerName="collect-profiles" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.160680 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe32095-6b2e-4019-a8c4-50bec0f786b9" containerName="collect-profiles" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.162563 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.166693 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.166743 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.167000 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.182406 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555146-b9bqf"] Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.247165 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb59h\" (UniqueName: \"kubernetes.io/projected/37a6b2ec-c23e-4fda-9651-6dc234d5b04f-kube-api-access-gb59h\") pod \"auto-csr-approver-29555146-b9bqf\" (UID: \"37a6b2ec-c23e-4fda-9651-6dc234d5b04f\") " pod="openshift-infra/auto-csr-approver-29555146-b9bqf" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.349102 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb59h\" (UniqueName: 
\"kubernetes.io/projected/37a6b2ec-c23e-4fda-9651-6dc234d5b04f-kube-api-access-gb59h\") pod \"auto-csr-approver-29555146-b9bqf\" (UID: \"37a6b2ec-c23e-4fda-9651-6dc234d5b04f\") " pod="openshift-infra/auto-csr-approver-29555146-b9bqf" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.379689 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb59h\" (UniqueName: \"kubernetes.io/projected/37a6b2ec-c23e-4fda-9651-6dc234d5b04f-kube-api-access-gb59h\") pod \"auto-csr-approver-29555146-b9bqf\" (UID: \"37a6b2ec-c23e-4fda-9651-6dc234d5b04f\") " pod="openshift-infra/auto-csr-approver-29555146-b9bqf" Mar 12 09:46:00 crc kubenswrapper[4809]: I0312 09:46:00.489161 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" Mar 12 09:46:01 crc kubenswrapper[4809]: I0312 09:46:01.207126 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555146-b9bqf"] Mar 12 09:46:01 crc kubenswrapper[4809]: I0312 09:46:01.737736 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" event={"ID":"37a6b2ec-c23e-4fda-9651-6dc234d5b04f","Type":"ContainerStarted","Data":"ef94eff684a043c42181129794a9fd0396604c90904de531e481ea96cb43161c"} Mar 12 09:46:03 crc kubenswrapper[4809]: I0312 09:46:03.767092 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" event={"ID":"37a6b2ec-c23e-4fda-9651-6dc234d5b04f","Type":"ContainerStarted","Data":"dd6074b5dcdc8a520d40a923cd89fdac3ea666ccfe0e17fafa50e729cbe02803"} Mar 12 09:46:03 crc kubenswrapper[4809]: I0312 09:46:03.791389 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" podStartSLOduration=2.726508577 podStartE2EDuration="3.791366407s" podCreationTimestamp="2026-03-12 09:46:00 +0000 UTC" 
firstStartedPulling="2026-03-12 09:46:01.21353051 +0000 UTC m=+6434.795566243" lastFinishedPulling="2026-03-12 09:46:02.27838834 +0000 UTC m=+6435.860424073" observedRunningTime="2026-03-12 09:46:03.782190926 +0000 UTC m=+6437.364226649" watchObservedRunningTime="2026-03-12 09:46:03.791366407 +0000 UTC m=+6437.373402140" Mar 12 09:46:04 crc kubenswrapper[4809]: I0312 09:46:04.778548 4809 generic.go:334] "Generic (PLEG): container finished" podID="37a6b2ec-c23e-4fda-9651-6dc234d5b04f" containerID="dd6074b5dcdc8a520d40a923cd89fdac3ea666ccfe0e17fafa50e729cbe02803" exitCode=0 Mar 12 09:46:04 crc kubenswrapper[4809]: I0312 09:46:04.778593 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" event={"ID":"37a6b2ec-c23e-4fda-9651-6dc234d5b04f","Type":"ContainerDied","Data":"dd6074b5dcdc8a520d40a923cd89fdac3ea666ccfe0e17fafa50e729cbe02803"} Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.249778 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.327012 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb59h\" (UniqueName: \"kubernetes.io/projected/37a6b2ec-c23e-4fda-9651-6dc234d5b04f-kube-api-access-gb59h\") pod \"37a6b2ec-c23e-4fda-9651-6dc234d5b04f\" (UID: \"37a6b2ec-c23e-4fda-9651-6dc234d5b04f\") " Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.333478 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a6b2ec-c23e-4fda-9651-6dc234d5b04f-kube-api-access-gb59h" (OuterVolumeSpecName: "kube-api-access-gb59h") pod "37a6b2ec-c23e-4fda-9651-6dc234d5b04f" (UID: "37a6b2ec-c23e-4fda-9651-6dc234d5b04f"). InnerVolumeSpecName "kube-api-access-gb59h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.430706 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb59h\" (UniqueName: \"kubernetes.io/projected/37a6b2ec-c23e-4fda-9651-6dc234d5b04f-kube-api-access-gb59h\") on node \"crc\" DevicePath \"\"" Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.805062 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" event={"ID":"37a6b2ec-c23e-4fda-9651-6dc234d5b04f","Type":"ContainerDied","Data":"ef94eff684a043c42181129794a9fd0396604c90904de531e481ea96cb43161c"} Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.805425 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef94eff684a043c42181129794a9fd0396604c90904de531e481ea96cb43161c" Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.805138 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555146-b9bqf" Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.863222 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555140-7nmbx"] Mar 12 09:46:06 crc kubenswrapper[4809]: I0312 09:46:06.873954 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555140-7nmbx"] Mar 12 09:46:07 crc kubenswrapper[4809]: I0312 09:46:07.120732 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363c2df6-c2f2-4cfd-ad74-4a2a598cb809" path="/var/lib/kubelet/pods/363c2df6-c2f2-4cfd-ad74-4a2a598cb809/volumes" Mar 12 09:46:08 crc kubenswrapper[4809]: I0312 09:46:08.106105 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:46:08 crc kubenswrapper[4809]: E0312 09:46:08.106732 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.139382 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7_85e8b987-8bed-4d15-b39c-5fd8834e6994/util/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.338315 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7_85e8b987-8bed-4d15-b39c-5fd8834e6994/pull/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.362495 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7_85e8b987-8bed-4d15-b39c-5fd8834e6994/util/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.404774 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7_85e8b987-8bed-4d15-b39c-5fd8834e6994/pull/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.583536 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7_85e8b987-8bed-4d15-b39c-5fd8834e6994/util/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.618031 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7_85e8b987-8bed-4d15-b39c-5fd8834e6994/extract/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.633566 
4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874rnmb7_85e8b987-8bed-4d15-b39c-5fd8834e6994/pull/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.763354 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp_829eb981-1af8-4c1f-982b-47e8141d9154/util/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.965646 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp_829eb981-1af8-4c1f-982b-47e8141d9154/util/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.975227 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp_829eb981-1af8-4c1f-982b-47e8141d9154/pull/0.log" Mar 12 09:46:12 crc kubenswrapper[4809]: I0312 09:46:12.992304 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp_829eb981-1af8-4c1f-982b-47e8141d9154/pull/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.250171 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp_829eb981-1af8-4c1f-982b-47e8141d9154/extract/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.256618 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp_829eb981-1af8-4c1f-982b-47e8141d9154/util/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.271907 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c14f9hp_829eb981-1af8-4c1f-982b-47e8141d9154/pull/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.459166 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b_72104bf6-f1c5-4957-b4d7-f6254d1c2121/util/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.693789 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b_72104bf6-f1c5-4957-b4d7-f6254d1c2121/util/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.708966 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b_72104bf6-f1c5-4957-b4d7-f6254d1c2121/pull/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.715526 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b_72104bf6-f1c5-4957-b4d7-f6254d1c2121/pull/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.904692 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b_72104bf6-f1c5-4957-b4d7-f6254d1c2121/pull/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.933157 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b_72104bf6-f1c5-4957-b4d7-f6254d1c2121/util/0.log" Mar 12 09:46:13 crc kubenswrapper[4809]: I0312 09:46:13.953850 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19jxn6b_72104bf6-f1c5-4957-b4d7-f6254d1c2121/extract/0.log" Mar 12 
09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.086747 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r_56b4ec18-bdd0-4122-ab71-3ff920814d18/util/0.log" Mar 12 09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.321540 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r_56b4ec18-bdd0-4122-ab71-3ff920814d18/util/0.log" Mar 12 09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.327804 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r_56b4ec18-bdd0-4122-ab71-3ff920814d18/pull/0.log" Mar 12 09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.368314 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r_56b4ec18-bdd0-4122-ab71-3ff920814d18/pull/0.log" Mar 12 09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.561633 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r_56b4ec18-bdd0-4122-ab71-3ff920814d18/util/0.log" Mar 12 09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.575764 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r_56b4ec18-bdd0-4122-ab71-3ff920814d18/pull/0.log" Mar 12 09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.576469 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086xr6r_56b4ec18-bdd0-4122-ab71-3ff920814d18/extract/0.log" Mar 12 09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.770608 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-zqzjb_f9d9e6d3-d87f-485b-bb03-6ed4f067de44/extract-utilities/0.log" Mar 12 09:46:14 crc kubenswrapper[4809]: I0312 09:46:14.971164 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zqzjb_f9d9e6d3-d87f-485b-bb03-6ed4f067de44/extract-utilities/0.log" Mar 12 09:46:15 crc kubenswrapper[4809]: I0312 09:46:15.004890 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zqzjb_f9d9e6d3-d87f-485b-bb03-6ed4f067de44/extract-content/0.log" Mar 12 09:46:15 crc kubenswrapper[4809]: I0312 09:46:15.005461 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zqzjb_f9d9e6d3-d87f-485b-bb03-6ed4f067de44/extract-content/0.log" Mar 12 09:46:15 crc kubenswrapper[4809]: I0312 09:46:15.180780 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zqzjb_f9d9e6d3-d87f-485b-bb03-6ed4f067de44/extract-utilities/0.log" Mar 12 09:46:15 crc kubenswrapper[4809]: I0312 09:46:15.200780 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zqzjb_f9d9e6d3-d87f-485b-bb03-6ed4f067de44/extract-content/0.log" Mar 12 09:46:15 crc kubenswrapper[4809]: I0312 09:46:15.411506 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-clb7l_ccd05106-f862-4106-bc2e-4ec90d4240dc/extract-utilities/0.log" Mar 12 09:46:15 crc kubenswrapper[4809]: I0312 09:46:15.808426 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-clb7l_ccd05106-f862-4106-bc2e-4ec90d4240dc/extract-content/0.log" Mar 12 09:46:15 crc kubenswrapper[4809]: I0312 09:46:15.838062 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-clb7l_ccd05106-f862-4106-bc2e-4ec90d4240dc/extract-content/0.log" Mar 12 09:46:15 crc kubenswrapper[4809]: I0312 09:46:15.883985 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-clb7l_ccd05106-f862-4106-bc2e-4ec90d4240dc/extract-utilities/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.235342 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zqzjb_f9d9e6d3-d87f-485b-bb03-6ed4f067de44/registry-server/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.248269 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-clb7l_ccd05106-f862-4106-bc2e-4ec90d4240dc/extract-utilities/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.279964 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-clb7l_ccd05106-f862-4106-bc2e-4ec90d4240dc/extract-content/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.557847 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9_39c32eb2-651b-4aee-a586-4a8b123f07f8/util/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.564520 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-clb7l_ccd05106-f862-4106-bc2e-4ec90d4240dc/registry-server/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.792754 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9_39c32eb2-651b-4aee-a586-4a8b123f07f8/pull/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.811210 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9_39c32eb2-651b-4aee-a586-4a8b123f07f8/pull/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.842928 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9_39c32eb2-651b-4aee-a586-4a8b123f07f8/util/0.log" Mar 12 09:46:16 crc kubenswrapper[4809]: I0312 09:46:16.995146 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9_39c32eb2-651b-4aee-a586-4a8b123f07f8/pull/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.018552 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9_39c32eb2-651b-4aee-a586-4a8b123f07f8/util/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.020286 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989rdjh9_39c32eb2-651b-4aee-a586-4a8b123f07f8/extract/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.096636 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lncvk_49c3f940-85d8-49c5-a529-367c56018858/marketplace-operator/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.275524 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hrqzc_d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1/extract-utilities/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.451658 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hrqzc_d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1/extract-content/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.451784 4809 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hrqzc_d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1/extract-utilities/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.475178 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hrqzc_d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1/extract-content/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.680592 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hrqzc_d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1/extract-utilities/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.730544 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pd2vq_ec52f8eb-40dd-4475-9726-69b84829233d/extract-utilities/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.735188 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hrqzc_d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1/extract-content/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.893196 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hrqzc_d0d3a3fc-1ad7-4a41-a732-d16ca4e7ceb1/registry-server/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.952202 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pd2vq_ec52f8eb-40dd-4475-9726-69b84829233d/extract-utilities/0.log" Mar 12 09:46:17 crc kubenswrapper[4809]: I0312 09:46:17.985364 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pd2vq_ec52f8eb-40dd-4475-9726-69b84829233d/extract-content/0.log" Mar 12 09:46:18 crc kubenswrapper[4809]: I0312 09:46:18.013356 4809 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-pd2vq_ec52f8eb-40dd-4475-9726-69b84829233d/extract-content/0.log" Mar 12 09:46:18 crc kubenswrapper[4809]: I0312 09:46:18.146833 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pd2vq_ec52f8eb-40dd-4475-9726-69b84829233d/extract-utilities/0.log" Mar 12 09:46:18 crc kubenswrapper[4809]: I0312 09:46:18.184218 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pd2vq_ec52f8eb-40dd-4475-9726-69b84829233d/extract-content/0.log" Mar 12 09:46:19 crc kubenswrapper[4809]: I0312 09:46:19.106041 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:46:19 crc kubenswrapper[4809]: E0312 09:46:19.107452 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:46:19 crc kubenswrapper[4809]: I0312 09:46:19.212015 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pd2vq_ec52f8eb-40dd-4475-9726-69b84829233d/registry-server/0.log" Mar 12 09:46:30 crc kubenswrapper[4809]: I0312 09:46:30.106954 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:46:30 crc kubenswrapper[4809]: E0312 09:46:30.108646 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:46:31 crc kubenswrapper[4809]: I0312 09:46:31.901425 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56f5999595-xkcxn_ef4ced13-c901-407a-aa7b-ed5198a4cca8/prometheus-operator-admission-webhook/0.log" Mar 12 09:46:31 crc kubenswrapper[4809]: I0312 09:46:31.923397 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fzj9d_a6312c6e-68f1-40c5-82ea-50fda65c492f/prometheus-operator/0.log" Mar 12 09:46:31 crc kubenswrapper[4809]: I0312 09:46:31.951324 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56f5999595-jrnk6_d4977a4c-9481-45c8-ba76-6e985c4e11be/prometheus-operator-admission-webhook/0.log" Mar 12 09:46:32 crc kubenswrapper[4809]: I0312 09:46:32.120851 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-p9md4_a3c7bda2-fd1b-4e75-8991-a7f713283b7d/observability-ui-dashboards/0.log" Mar 12 09:46:32 crc kubenswrapper[4809]: I0312 09:46:32.132285 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-wwrvc_8f1a2dab-e883-409f-ba21-a52ea0947c1b/operator/0.log" Mar 12 09:46:32 crc kubenswrapper[4809]: I0312 09:46:32.186769 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gb2mk_68cdd0d8-8927-4777-8067-995b7a404794/perses-operator/0.log" Mar 12 09:46:42 crc kubenswrapper[4809]: I0312 09:46:42.106362 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:46:42 crc 
kubenswrapper[4809]: E0312 09:46:42.107661 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:46:46 crc kubenswrapper[4809]: I0312 09:46:46.311223 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65bb5b59df-8jk5h_861c2912-a932-4142-9b25-c7c0e1aaf062/manager/0.log" Mar 12 09:46:46 crc kubenswrapper[4809]: I0312 09:46:46.325782 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65bb5b59df-8jk5h_861c2912-a932-4142-9b25-c7c0e1aaf062/kube-rbac-proxy/0.log" Mar 12 09:46:53 crc kubenswrapper[4809]: I0312 09:46:53.106349 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:46:53 crc kubenswrapper[4809]: E0312 09:46:53.107558 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:46:59 crc kubenswrapper[4809]: I0312 09:46:59.201182 4809 scope.go:117] "RemoveContainer" containerID="449e5cf6a59f9b7dfa719db451d1a172aeb37fbaac048edbe084ff068a76da5b" Mar 12 09:46:59 crc kubenswrapper[4809]: I0312 09:46:59.257632 4809 scope.go:117] "RemoveContainer" 
containerID="6d41a315fa4f3b1fd87c667d783f452ad0a4791ab752fcd92d86b57bf800f08d" Mar 12 09:47:07 crc kubenswrapper[4809]: I0312 09:47:07.114940 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:47:07 crc kubenswrapper[4809]: E0312 09:47:07.115996 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:47:21 crc kubenswrapper[4809]: I0312 09:47:21.106790 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:47:21 crc kubenswrapper[4809]: E0312 09:47:21.107671 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:47:32 crc kubenswrapper[4809]: I0312 09:47:32.107479 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:47:32 crc kubenswrapper[4809]: E0312 09:47:32.108602 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:47:45 crc kubenswrapper[4809]: I0312 09:47:45.107863 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:47:45 crc kubenswrapper[4809]: E0312 09:47:45.109002 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:47:59 crc kubenswrapper[4809]: I0312 09:47:59.106584 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:47:59 crc kubenswrapper[4809]: E0312 09:47:59.108016 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:47:59 crc kubenswrapper[4809]: I0312 09:47:59.459460 4809 scope.go:117] "RemoveContainer" containerID="bba63ce049831f1e3377230ca5151228973952bd2a4c40d4fbda41d62ddc079c" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.167533 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555148-xt5ch"] Mar 12 09:48:00 crc kubenswrapper[4809]: E0312 09:48:00.169344 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a6b2ec-c23e-4fda-9651-6dc234d5b04f" containerName="oc" 
Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.169429 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a6b2ec-c23e-4fda-9651-6dc234d5b04f" containerName="oc" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.169800 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a6b2ec-c23e-4fda-9651-6dc234d5b04f" containerName="oc" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.170849 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.176616 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.176788 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.177031 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.182188 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555148-xt5ch"] Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.320068 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjb4z\" (UniqueName: \"kubernetes.io/projected/edec864a-4369-4b47-a470-e466d4b1083f-kube-api-access-qjb4z\") pod \"auto-csr-approver-29555148-xt5ch\" (UID: \"edec864a-4369-4b47-a470-e466d4b1083f\") " pod="openshift-infra/auto-csr-approver-29555148-xt5ch" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.424595 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjb4z\" (UniqueName: \"kubernetes.io/projected/edec864a-4369-4b47-a470-e466d4b1083f-kube-api-access-qjb4z\") pod 
\"auto-csr-approver-29555148-xt5ch\" (UID: \"edec864a-4369-4b47-a470-e466d4b1083f\") " pod="openshift-infra/auto-csr-approver-29555148-xt5ch" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.457970 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjb4z\" (UniqueName: \"kubernetes.io/projected/edec864a-4369-4b47-a470-e466d4b1083f-kube-api-access-qjb4z\") pod \"auto-csr-approver-29555148-xt5ch\" (UID: \"edec864a-4369-4b47-a470-e466d4b1083f\") " pod="openshift-infra/auto-csr-approver-29555148-xt5ch" Mar 12 09:48:00 crc kubenswrapper[4809]: I0312 09:48:00.515211 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" Mar 12 09:48:01 crc kubenswrapper[4809]: I0312 09:48:01.410557 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555148-xt5ch"] Mar 12 09:48:01 crc kubenswrapper[4809]: I0312 09:48:01.436289 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 09:48:02 crc kubenswrapper[4809]: I0312 09:48:02.398109 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" event={"ID":"edec864a-4369-4b47-a470-e466d4b1083f","Type":"ContainerStarted","Data":"6f1611ca3db144819f5c95762faffc03e08563c6d0bab4f0160031b35ef09473"} Mar 12 09:48:03 crc kubenswrapper[4809]: I0312 09:48:03.415986 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" event={"ID":"edec864a-4369-4b47-a470-e466d4b1083f","Type":"ContainerStarted","Data":"e184dc6af4c4fea4a4a0f265a066dde88a930a745b0913cc3d3e3bb5eb0ecc18"} Mar 12 09:48:03 crc kubenswrapper[4809]: I0312 09:48:03.434700 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" podStartSLOduration=2.542154309 podStartE2EDuration="3.434681691s" 
podCreationTimestamp="2026-03-12 09:48:00 +0000 UTC" firstStartedPulling="2026-03-12 09:48:01.429958112 +0000 UTC m=+6555.011993845" lastFinishedPulling="2026-03-12 09:48:02.322485494 +0000 UTC m=+6555.904521227" observedRunningTime="2026-03-12 09:48:03.429508049 +0000 UTC m=+6557.011543792" watchObservedRunningTime="2026-03-12 09:48:03.434681691 +0000 UTC m=+6557.016717424" Mar 12 09:48:04 crc kubenswrapper[4809]: I0312 09:48:04.428838 4809 generic.go:334] "Generic (PLEG): container finished" podID="edec864a-4369-4b47-a470-e466d4b1083f" containerID="e184dc6af4c4fea4a4a0f265a066dde88a930a745b0913cc3d3e3bb5eb0ecc18" exitCode=0 Mar 12 09:48:04 crc kubenswrapper[4809]: I0312 09:48:04.428904 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" event={"ID":"edec864a-4369-4b47-a470-e466d4b1083f","Type":"ContainerDied","Data":"e184dc6af4c4fea4a4a0f265a066dde88a930a745b0913cc3d3e3bb5eb0ecc18"} Mar 12 09:48:05 crc kubenswrapper[4809]: I0312 09:48:05.815898 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" Mar 12 09:48:05 crc kubenswrapper[4809]: I0312 09:48:05.964908 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjb4z\" (UniqueName: \"kubernetes.io/projected/edec864a-4369-4b47-a470-e466d4b1083f-kube-api-access-qjb4z\") pod \"edec864a-4369-4b47-a470-e466d4b1083f\" (UID: \"edec864a-4369-4b47-a470-e466d4b1083f\") " Mar 12 09:48:05 crc kubenswrapper[4809]: I0312 09:48:05.972941 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edec864a-4369-4b47-a470-e466d4b1083f-kube-api-access-qjb4z" (OuterVolumeSpecName: "kube-api-access-qjb4z") pod "edec864a-4369-4b47-a470-e466d4b1083f" (UID: "edec864a-4369-4b47-a470-e466d4b1083f"). InnerVolumeSpecName "kube-api-access-qjb4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:48:06 crc kubenswrapper[4809]: I0312 09:48:06.068698 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjb4z\" (UniqueName: \"kubernetes.io/projected/edec864a-4369-4b47-a470-e466d4b1083f-kube-api-access-qjb4z\") on node \"crc\" DevicePath \"\"" Mar 12 09:48:06 crc kubenswrapper[4809]: I0312 09:48:06.509521 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" Mar 12 09:48:06 crc kubenswrapper[4809]: I0312 09:48:06.509505 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555148-xt5ch" event={"ID":"edec864a-4369-4b47-a470-e466d4b1083f","Type":"ContainerDied","Data":"6f1611ca3db144819f5c95762faffc03e08563c6d0bab4f0160031b35ef09473"} Mar 12 09:48:06 crc kubenswrapper[4809]: I0312 09:48:06.509858 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f1611ca3db144819f5c95762faffc03e08563c6d0bab4f0160031b35ef09473" Mar 12 09:48:06 crc kubenswrapper[4809]: I0312 09:48:06.521316 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555142-zvrth"] Mar 12 09:48:06 crc kubenswrapper[4809]: I0312 09:48:06.537340 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555142-zvrth"] Mar 12 09:48:07 crc kubenswrapper[4809]: I0312 09:48:07.120205 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5935bc45-f371-4dd3-b7ce-65955339ea44" path="/var/lib/kubelet/pods/5935bc45-f371-4dd3-b7ce-65955339ea44/volumes" Mar 12 09:48:14 crc kubenswrapper[4809]: I0312 09:48:14.106108 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:48:14 crc kubenswrapper[4809]: E0312 09:48:14.107087 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.819618 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xwc5h"] Mar 12 09:48:27 crc kubenswrapper[4809]: E0312 09:48:27.820772 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edec864a-4369-4b47-a470-e466d4b1083f" containerName="oc" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.820789 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="edec864a-4369-4b47-a470-e466d4b1083f" containerName="oc" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.821104 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="edec864a-4369-4b47-a470-e466d4b1083f" containerName="oc" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.824784 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.854317 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwc5h"] Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.893612 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rdd2\" (UniqueName: \"kubernetes.io/projected/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-kube-api-access-4rdd2\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.893913 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-utilities\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.894094 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-catalog-content\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.999343 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-utilities\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:27 crc kubenswrapper[4809]: I0312 09:48:27.999562 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-catalog-content\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.000155 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rdd2\" (UniqueName: \"kubernetes.io/projected/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-kube-api-access-4rdd2\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.001639 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-utilities\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.001689 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-catalog-content\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.025750 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rdd2\" (UniqueName: \"kubernetes.io/projected/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-kube-api-access-4rdd2\") pod \"certified-operators-xwc5h\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.106391 4809 scope.go:117] "RemoveContainer" 
containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:48:28 crc kubenswrapper[4809]: E0312 09:48:28.106833 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.149097 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.634434 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwc5h"] Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.868496 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwc5h" event={"ID":"eb9d21bd-238f-4182-ac36-3c0db68c2d2c","Type":"ContainerStarted","Data":"caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d"} Mar 12 09:48:28 crc kubenswrapper[4809]: I0312 09:48:28.868547 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwc5h" event={"ID":"eb9d21bd-238f-4182-ac36-3c0db68c2d2c","Type":"ContainerStarted","Data":"dd8b3de7b151a7098b7f0c1fd78efc6137d922e519ee355783680f6bb5c41e23"} Mar 12 09:48:29 crc kubenswrapper[4809]: I0312 09:48:29.881659 4809 generic.go:334] "Generic (PLEG): container finished" podID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerID="caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d" exitCode=0 Mar 12 09:48:29 crc kubenswrapper[4809]: I0312 09:48:29.881766 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-xwc5h" event={"ID":"eb9d21bd-238f-4182-ac36-3c0db68c2d2c","Type":"ContainerDied","Data":"caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d"} Mar 12 09:48:30 crc kubenswrapper[4809]: I0312 09:48:30.897766 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwc5h" event={"ID":"eb9d21bd-238f-4182-ac36-3c0db68c2d2c","Type":"ContainerStarted","Data":"ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9"} Mar 12 09:48:33 crc kubenswrapper[4809]: I0312 09:48:33.007834 4809 generic.go:334] "Generic (PLEG): container finished" podID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerID="ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9" exitCode=0 Mar 12 09:48:33 crc kubenswrapper[4809]: I0312 09:48:33.007931 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwc5h" event={"ID":"eb9d21bd-238f-4182-ac36-3c0db68c2d2c","Type":"ContainerDied","Data":"ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9"} Mar 12 09:48:34 crc kubenswrapper[4809]: I0312 09:48:34.022202 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwc5h" event={"ID":"eb9d21bd-238f-4182-ac36-3c0db68c2d2c","Type":"ContainerStarted","Data":"185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86"} Mar 12 09:48:34 crc kubenswrapper[4809]: I0312 09:48:34.039933 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xwc5h" podStartSLOduration=3.503041698 podStartE2EDuration="7.039918118s" podCreationTimestamp="2026-03-12 09:48:27 +0000 UTC" firstStartedPulling="2026-03-12 09:48:29.883643522 +0000 UTC m=+6583.465679245" lastFinishedPulling="2026-03-12 09:48:33.420519932 +0000 UTC m=+6587.002555665" observedRunningTime="2026-03-12 09:48:34.03745837 +0000 UTC m=+6587.619494143" 
watchObservedRunningTime="2026-03-12 09:48:34.039918118 +0000 UTC m=+6587.621953851" Mar 12 09:48:38 crc kubenswrapper[4809]: I0312 09:48:38.150795 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:38 crc kubenswrapper[4809]: I0312 09:48:38.151782 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:39 crc kubenswrapper[4809]: I0312 09:48:39.210228 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xwc5h" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="registry-server" probeResult="failure" output=< Mar 12 09:48:39 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:48:39 crc kubenswrapper[4809]: > Mar 12 09:48:43 crc kubenswrapper[4809]: I0312 09:48:43.106147 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:48:43 crc kubenswrapper[4809]: E0312 09:48:43.107056 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:48:49 crc kubenswrapper[4809]: I0312 09:48:49.207936 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xwc5h" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="registry-server" probeResult="failure" output=< Mar 12 09:48:49 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s Mar 12 09:48:49 crc kubenswrapper[4809]: > Mar 12 09:48:55 crc 
kubenswrapper[4809]: I0312 09:48:55.106558 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:48:55 crc kubenswrapper[4809]: E0312 09:48:55.108054 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:48:58 crc kubenswrapper[4809]: I0312 09:48:58.204308 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:58 crc kubenswrapper[4809]: I0312 09:48:58.272998 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:48:59 crc kubenswrapper[4809]: I0312 09:48:59.027835 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xwc5h"] Mar 12 09:48:59 crc kubenswrapper[4809]: I0312 09:48:59.441760 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xwc5h" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="registry-server" containerID="cri-o://185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86" gracePeriod=2 Mar 12 09:48:59 crc kubenswrapper[4809]: I0312 09:48:59.552494 4809 scope.go:117] "RemoveContainer" containerID="3612feef0c6610b319319f4affa5f692e9111fc6f14218b40866899ff4aff337" Mar 12 09:48:59 crc kubenswrapper[4809]: I0312 09:48:59.683992 4809 scope.go:117] "RemoveContainer" containerID="8aa61896e9119cbc4be4e9e4098ffca8515977f68996a0eb96da39217219014f" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.075466 
4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.161719 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-utilities\") pod \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.161894 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-catalog-content\") pod \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.162140 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rdd2\" (UniqueName: \"kubernetes.io/projected/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-kube-api-access-4rdd2\") pod \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\" (UID: \"eb9d21bd-238f-4182-ac36-3c0db68c2d2c\") " Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.163052 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-utilities" (OuterVolumeSpecName: "utilities") pod "eb9d21bd-238f-4182-ac36-3c0db68c2d2c" (UID: "eb9d21bd-238f-4182-ac36-3c0db68c2d2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.169601 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-kube-api-access-4rdd2" (OuterVolumeSpecName: "kube-api-access-4rdd2") pod "eb9d21bd-238f-4182-ac36-3c0db68c2d2c" (UID: "eb9d21bd-238f-4182-ac36-3c0db68c2d2c"). 
InnerVolumeSpecName "kube-api-access-4rdd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.234131 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb9d21bd-238f-4182-ac36-3c0db68c2d2c" (UID: "eb9d21bd-238f-4182-ac36-3c0db68c2d2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.271787 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.271826 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.271844 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rdd2\" (UniqueName: \"kubernetes.io/projected/eb9d21bd-238f-4182-ac36-3c0db68c2d2c-kube-api-access-4rdd2\") on node \"crc\" DevicePath \"\"" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.452433 4809 generic.go:334] "Generic (PLEG): container finished" podID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerID="185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86" exitCode=0 Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.452501 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xwc5h" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.452507 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwc5h" event={"ID":"eb9d21bd-238f-4182-ac36-3c0db68c2d2c","Type":"ContainerDied","Data":"185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86"} Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.452653 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwc5h" event={"ID":"eb9d21bd-238f-4182-ac36-3c0db68c2d2c","Type":"ContainerDied","Data":"dd8b3de7b151a7098b7f0c1fd78efc6137d922e519ee355783680f6bb5c41e23"} Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.452703 4809 scope.go:117] "RemoveContainer" containerID="185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.473649 4809 scope.go:117] "RemoveContainer" containerID="ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.501540 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xwc5h"] Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.501659 4809 scope.go:117] "RemoveContainer" containerID="caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.516631 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xwc5h"] Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.535556 4809 scope.go:117] "RemoveContainer" containerID="185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86" Mar 12 09:49:00 crc kubenswrapper[4809]: E0312 09:49:00.536600 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86\": container with ID starting with 185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86 not found: ID does not exist" containerID="185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.536676 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86"} err="failed to get container status \"185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86\": rpc error: code = NotFound desc = could not find container \"185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86\": container with ID starting with 185126f22153a210b867eb1e1ed3fb7f963ba5ae319d51fcbc05d3b65664bf86 not found: ID does not exist" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.536722 4809 scope.go:117] "RemoveContainer" containerID="ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9" Mar 12 09:49:00 crc kubenswrapper[4809]: E0312 09:49:00.537345 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9\": container with ID starting with ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9 not found: ID does not exist" containerID="ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.537403 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9"} err="failed to get container status \"ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9\": rpc error: code = NotFound desc = could not find container \"ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9\": container with ID 
starting with ae6bb961c8c4903f82b415ee41df121a422873c62067781b95461ce844e13ad9 not found: ID does not exist" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.537448 4809 scope.go:117] "RemoveContainer" containerID="caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d" Mar 12 09:49:00 crc kubenswrapper[4809]: E0312 09:49:00.537868 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d\": container with ID starting with caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d not found: ID does not exist" containerID="caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d" Mar 12 09:49:00 crc kubenswrapper[4809]: I0312 09:49:00.537908 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d"} err="failed to get container status \"caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d\": rpc error: code = NotFound desc = could not find container \"caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d\": container with ID starting with caa38667df8d146b64087732f5a2a80d3ea8bd74989295a2e974608efe79c26d not found: ID does not exist" Mar 12 09:49:01 crc kubenswrapper[4809]: I0312 09:49:01.123953 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" path="/var/lib/kubelet/pods/eb9d21bd-238f-4182-ac36-3c0db68c2d2c/volumes" Mar 12 09:49:02 crc kubenswrapper[4809]: I0312 09:49:02.479970 4809 generic.go:334] "Generic (PLEG): container finished" podID="af6d2578-2db0-448f-9bf5-c695df62f63b" containerID="f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723" exitCode=0 Mar 12 09:49:02 crc kubenswrapper[4809]: I0312 09:49:02.480097 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-pnl96/must-gather-256gx" event={"ID":"af6d2578-2db0-448f-9bf5-c695df62f63b","Type":"ContainerDied","Data":"f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723"} Mar 12 09:49:02 crc kubenswrapper[4809]: I0312 09:49:02.482660 4809 scope.go:117] "RemoveContainer" containerID="f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723" Mar 12 09:49:03 crc kubenswrapper[4809]: I0312 09:49:03.334655 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pnl96_must-gather-256gx_af6d2578-2db0-448f-9bf5-c695df62f63b/gather/0.log" Mar 12 09:49:10 crc kubenswrapper[4809]: I0312 09:49:10.106024 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:49:10 crc kubenswrapper[4809]: E0312 09:49:10.107044 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:49:14 crc kubenswrapper[4809]: I0312 09:49:14.714170 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pnl96/must-gather-256gx"] Mar 12 09:49:14 crc kubenswrapper[4809]: I0312 09:49:14.715698 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pnl96/must-gather-256gx" podUID="af6d2578-2db0-448f-9bf5-c695df62f63b" containerName="copy" containerID="cri-o://e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded" gracePeriod=2 Mar 12 09:49:14 crc kubenswrapper[4809]: I0312 09:49:14.725240 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pnl96/must-gather-256gx"] Mar 12 09:49:15 crc 
kubenswrapper[4809]: I0312 09:49:15.341066 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pnl96_must-gather-256gx_af6d2578-2db0-448f-9bf5-c695df62f63b/copy/0.log" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.342476 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.408719 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9zbq\" (UniqueName: \"kubernetes.io/projected/af6d2578-2db0-448f-9bf5-c695df62f63b-kube-api-access-z9zbq\") pod \"af6d2578-2db0-448f-9bf5-c695df62f63b\" (UID: \"af6d2578-2db0-448f-9bf5-c695df62f63b\") " Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.429646 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6d2578-2db0-448f-9bf5-c695df62f63b-kube-api-access-z9zbq" (OuterVolumeSpecName: "kube-api-access-z9zbq") pod "af6d2578-2db0-448f-9bf5-c695df62f63b" (UID: "af6d2578-2db0-448f-9bf5-c695df62f63b"). InnerVolumeSpecName "kube-api-access-z9zbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.511156 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af6d2578-2db0-448f-9bf5-c695df62f63b-must-gather-output\") pod \"af6d2578-2db0-448f-9bf5-c695df62f63b\" (UID: \"af6d2578-2db0-448f-9bf5-c695df62f63b\") " Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.512383 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9zbq\" (UniqueName: \"kubernetes.io/projected/af6d2578-2db0-448f-9bf5-c695df62f63b-kube-api-access-z9zbq\") on node \"crc\" DevicePath \"\"" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.706324 4809 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pnl96_must-gather-256gx_af6d2578-2db0-448f-9bf5-c695df62f63b/copy/0.log" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.709567 4809 generic.go:334] "Generic (PLEG): container finished" podID="af6d2578-2db0-448f-9bf5-c695df62f63b" containerID="e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded" exitCode=143 Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.709665 4809 scope.go:117] "RemoveContainer" containerID="e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.709996 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pnl96/must-gather-256gx" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.729872 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6d2578-2db0-448f-9bf5-c695df62f63b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "af6d2578-2db0-448f-9bf5-c695df62f63b" (UID: "af6d2578-2db0-448f-9bf5-c695df62f63b"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.800363 4809 scope.go:117] "RemoveContainer" containerID="f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.856321 4809 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af6d2578-2db0-448f-9bf5-c695df62f63b-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.919455 4809 scope.go:117] "RemoveContainer" containerID="e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded" Mar 12 09:49:15 crc kubenswrapper[4809]: E0312 09:49:15.919960 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded\": container with ID starting with e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded not found: ID does not exist" containerID="e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.920017 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded"} err="failed to get container status \"e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded\": rpc error: code = NotFound desc = could not find container \"e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded\": container with ID starting with e46cbea23cd2d8d9bf85c10e0cdd4581950b657acc31a512e1eb50225abc4ded not found: ID does not exist" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.920051 4809 scope.go:117] "RemoveContainer" containerID="f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723" Mar 12 09:49:15 crc kubenswrapper[4809]: E0312 09:49:15.920745 4809 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723\": container with ID starting with f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723 not found: ID does not exist" containerID="f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723" Mar 12 09:49:15 crc kubenswrapper[4809]: I0312 09:49:15.920780 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723"} err="failed to get container status \"f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723\": rpc error: code = NotFound desc = could not find container \"f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723\": container with ID starting with f15337705ffb729809abd03cd2160f0bac1743d4ea2cc0228f4e401bf2d6d723 not found: ID does not exist" Mar 12 09:49:17 crc kubenswrapper[4809]: I0312 09:49:17.123859 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6d2578-2db0-448f-9bf5-c695df62f63b" path="/var/lib/kubelet/pods/af6d2578-2db0-448f-9bf5-c695df62f63b/volumes" Mar 12 09:49:25 crc kubenswrapper[4809]: I0312 09:49:25.106707 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:49:25 crc kubenswrapper[4809]: E0312 09:49:25.108027 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:49:36 crc kubenswrapper[4809]: I0312 09:49:36.105992 4809 scope.go:117] 
"RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:49:36 crc kubenswrapper[4809]: E0312 09:49:36.107239 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:49:50 crc kubenswrapper[4809]: I0312 09:49:50.107258 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:49:50 crc kubenswrapper[4809]: E0312 09:49:50.108635 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.181166 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555150-lj9n4"] Mar 12 09:50:00 crc kubenswrapper[4809]: E0312 09:50:00.182519 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="registry-server" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.182539 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="registry-server" Mar 12 09:50:00 crc kubenswrapper[4809]: E0312 09:50:00.182560 4809 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="af6d2578-2db0-448f-9bf5-c695df62f63b" containerName="gather" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.182567 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6d2578-2db0-448f-9bf5-c695df62f63b" containerName="gather" Mar 12 09:50:00 crc kubenswrapper[4809]: E0312 09:50:00.182584 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6d2578-2db0-448f-9bf5-c695df62f63b" containerName="copy" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.182591 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6d2578-2db0-448f-9bf5-c695df62f63b" containerName="copy" Mar 12 09:50:00 crc kubenswrapper[4809]: E0312 09:50:00.182617 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="extract-content" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.182625 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="extract-content" Mar 12 09:50:00 crc kubenswrapper[4809]: E0312 09:50:00.182643 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="extract-utilities" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.182650 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="extract-utilities" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.182952 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6d2578-2db0-448f-9bf5-c695df62f63b" containerName="copy" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.182979 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9d21bd-238f-4182-ac36-3c0db68c2d2c" containerName="registry-server" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.183008 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6d2578-2db0-448f-9bf5-c695df62f63b" 
containerName="gather" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.183981 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.186761 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.187058 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.193700 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555150-lj9n4"] Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.195961 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.334631 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzg2\" (UniqueName: \"kubernetes.io/projected/97a92a57-266f-404d-9d4f-80bd4c686a57-kube-api-access-jgzg2\") pod \"auto-csr-approver-29555150-lj9n4\" (UID: \"97a92a57-266f-404d-9d4f-80bd4c686a57\") " pod="openshift-infra/auto-csr-approver-29555150-lj9n4" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.436949 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzg2\" (UniqueName: \"kubernetes.io/projected/97a92a57-266f-404d-9d4f-80bd4c686a57-kube-api-access-jgzg2\") pod \"auto-csr-approver-29555150-lj9n4\" (UID: \"97a92a57-266f-404d-9d4f-80bd4c686a57\") " pod="openshift-infra/auto-csr-approver-29555150-lj9n4" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.460274 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzg2\" (UniqueName: 
\"kubernetes.io/projected/97a92a57-266f-404d-9d4f-80bd4c686a57-kube-api-access-jgzg2\") pod \"auto-csr-approver-29555150-lj9n4\" (UID: \"97a92a57-266f-404d-9d4f-80bd4c686a57\") " pod="openshift-infra/auto-csr-approver-29555150-lj9n4" Mar 12 09:50:00 crc kubenswrapper[4809]: I0312 09:50:00.509160 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" Mar 12 09:50:01 crc kubenswrapper[4809]: I0312 09:50:01.073612 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555150-lj9n4"] Mar 12 09:50:01 crc kubenswrapper[4809]: I0312 09:50:01.414522 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" event={"ID":"97a92a57-266f-404d-9d4f-80bd4c686a57","Type":"ContainerStarted","Data":"2a8f181a28152f511cf9fff1f735af78e270ccf5e4fc2757b5a9e13f7ea28370"} Mar 12 09:50:03 crc kubenswrapper[4809]: I0312 09:50:03.121812 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:50:03 crc kubenswrapper[4809]: E0312 09:50:03.123040 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:50:03 crc kubenswrapper[4809]: I0312 09:50:03.492516 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" event={"ID":"97a92a57-266f-404d-9d4f-80bd4c686a57","Type":"ContainerStarted","Data":"121e3e90940ad4ce6d093b32d93cfdab8566c4bef10502d9ad37f0b53e6e0cc1"} Mar 12 09:50:03 crc kubenswrapper[4809]: I0312 09:50:03.556054 4809 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" podStartSLOduration=2.700121242 podStartE2EDuration="3.556031632s" podCreationTimestamp="2026-03-12 09:50:00 +0000 UTC" firstStartedPulling="2026-03-12 09:50:01.093330126 +0000 UTC m=+6674.675365859" lastFinishedPulling="2026-03-12 09:50:01.949240516 +0000 UTC m=+6675.531276249" observedRunningTime="2026-03-12 09:50:03.523844301 +0000 UTC m=+6677.105880034" watchObservedRunningTime="2026-03-12 09:50:03.556031632 +0000 UTC m=+6677.138067375" Mar 12 09:50:05 crc kubenswrapper[4809]: I0312 09:50:05.516412 4809 generic.go:334] "Generic (PLEG): container finished" podID="97a92a57-266f-404d-9d4f-80bd4c686a57" containerID="121e3e90940ad4ce6d093b32d93cfdab8566c4bef10502d9ad37f0b53e6e0cc1" exitCode=0 Mar 12 09:50:05 crc kubenswrapper[4809]: I0312 09:50:05.516610 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" event={"ID":"97a92a57-266f-404d-9d4f-80bd4c686a57","Type":"ContainerDied","Data":"121e3e90940ad4ce6d093b32d93cfdab8566c4bef10502d9ad37f0b53e6e0cc1"} Mar 12 09:50:07 crc kubenswrapper[4809]: I0312 09:50:07.295450 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" Mar 12 09:50:07 crc kubenswrapper[4809]: I0312 09:50:07.388325 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgzg2\" (UniqueName: \"kubernetes.io/projected/97a92a57-266f-404d-9d4f-80bd4c686a57-kube-api-access-jgzg2\") pod \"97a92a57-266f-404d-9d4f-80bd4c686a57\" (UID: \"97a92a57-266f-404d-9d4f-80bd4c686a57\") " Mar 12 09:50:07 crc kubenswrapper[4809]: I0312 09:50:07.403722 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a92a57-266f-404d-9d4f-80bd4c686a57-kube-api-access-jgzg2" (OuterVolumeSpecName: "kube-api-access-jgzg2") pod "97a92a57-266f-404d-9d4f-80bd4c686a57" (UID: "97a92a57-266f-404d-9d4f-80bd4c686a57"). InnerVolumeSpecName "kube-api-access-jgzg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:50:07 crc kubenswrapper[4809]: I0312 09:50:07.491255 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgzg2\" (UniqueName: \"kubernetes.io/projected/97a92a57-266f-404d-9d4f-80bd4c686a57-kube-api-access-jgzg2\") on node \"crc\" DevicePath \"\"" Mar 12 09:50:07 crc kubenswrapper[4809]: I0312 09:50:07.552040 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" event={"ID":"97a92a57-266f-404d-9d4f-80bd4c686a57","Type":"ContainerDied","Data":"2a8f181a28152f511cf9fff1f735af78e270ccf5e4fc2757b5a9e13f7ea28370"} Mar 12 09:50:07 crc kubenswrapper[4809]: I0312 09:50:07.552137 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a8f181a28152f511cf9fff1f735af78e270ccf5e4fc2757b5a9e13f7ea28370" Mar 12 09:50:07 crc kubenswrapper[4809]: I0312 09:50:07.552105 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555150-lj9n4" Mar 12 09:50:08 crc kubenswrapper[4809]: I0312 09:50:08.403886 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555144-fddmz"] Mar 12 09:50:08 crc kubenswrapper[4809]: I0312 09:50:08.413598 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555144-fddmz"] Mar 12 09:50:09 crc kubenswrapper[4809]: I0312 09:50:09.120910 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97017784-7f50-41cc-82a7-d6aeb7265902" path="/var/lib/kubelet/pods/97017784-7f50-41cc-82a7-d6aeb7265902/volumes" Mar 12 09:50:18 crc kubenswrapper[4809]: I0312 09:50:18.106794 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:50:18 crc kubenswrapper[4809]: E0312 09:50:18.107871 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:50:32 crc kubenswrapper[4809]: I0312 09:50:32.107943 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:50:32 crc kubenswrapper[4809]: E0312 09:50:32.109154 4809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6d4c_openshift-machine-config-operator(101483ba-8ed3-40eb-9855-077e9add029f)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" 
podUID="101483ba-8ed3-40eb-9855-077e9add029f" Mar 12 09:50:46 crc kubenswrapper[4809]: I0312 09:50:46.106885 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3" Mar 12 09:50:47 crc kubenswrapper[4809]: I0312 09:50:47.414284 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"cac5286245729b3abd73c38c92528cc73c27638d4f267bc01eb2d31b515874ed"} Mar 12 09:50:59 crc kubenswrapper[4809]: I0312 09:50:59.878147 4809 scope.go:117] "RemoveContainer" containerID="5e68cbd2883814ab74c042bc03851c28af6a20ba6df276e132c3c2a6c6e8b1d9" Mar 12 09:51:51 crc kubenswrapper[4809]: I0312 09:51:51.936597 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nz8p7"] Mar 12 09:51:51 crc kubenswrapper[4809]: E0312 09:51:51.937827 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a92a57-266f-404d-9d4f-80bd4c686a57" containerName="oc" Mar 12 09:51:51 crc kubenswrapper[4809]: I0312 09:51:51.937841 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a92a57-266f-404d-9d4f-80bd4c686a57" containerName="oc" Mar 12 09:51:51 crc kubenswrapper[4809]: I0312 09:51:51.938080 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a92a57-266f-404d-9d4f-80bd4c686a57" containerName="oc" Mar 12 09:51:51 crc kubenswrapper[4809]: I0312 09:51:51.940949 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:51 crc kubenswrapper[4809]: I0312 09:51:51.951020 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nz8p7"] Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.028361 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-catalog-content\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.028428 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-utilities\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.028459 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tln9l\" (UniqueName: \"kubernetes.io/projected/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-kube-api-access-tln9l\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.132811 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-catalog-content\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.133474 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-utilities\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.133927 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tln9l\" (UniqueName: \"kubernetes.io/projected/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-kube-api-access-tln9l\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.135176 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-utilities\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.135756 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-catalog-content\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.174603 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tln9l\" (UniqueName: \"kubernetes.io/projected/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-kube-api-access-tln9l\") pod \"community-operators-nz8p7\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:52 crc kubenswrapper[4809]: I0312 09:51:52.284977 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:51:53 crc kubenswrapper[4809]: I0312 09:51:53.159818 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nz8p7"] Mar 12 09:51:53 crc kubenswrapper[4809]: I0312 09:51:53.305002 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nz8p7" event={"ID":"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a","Type":"ContainerStarted","Data":"75f7cb443dfe5881e556270c454743118e5915c918d6def023c116c11b33a80f"} Mar 12 09:51:54 crc kubenswrapper[4809]: I0312 09:51:54.325044 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerID="0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865" exitCode=0 Mar 12 09:51:54 crc kubenswrapper[4809]: I0312 09:51:54.325199 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nz8p7" event={"ID":"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a","Type":"ContainerDied","Data":"0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865"} Mar 12 09:51:56 crc kubenswrapper[4809]: I0312 09:51:56.361351 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nz8p7" event={"ID":"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a","Type":"ContainerStarted","Data":"84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6"} Mar 12 09:51:57 crc kubenswrapper[4809]: I0312 09:51:57.374561 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerID="84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6" exitCode=0 Mar 12 09:51:57 crc kubenswrapper[4809]: I0312 09:51:57.374754 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nz8p7" 
event={"ID":"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a","Type":"ContainerDied","Data":"84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6"} Mar 12 09:51:58 crc kubenswrapper[4809]: I0312 09:51:58.392492 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nz8p7" event={"ID":"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a","Type":"ContainerStarted","Data":"c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11"} Mar 12 09:51:58 crc kubenswrapper[4809]: I0312 09:51:58.433399 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nz8p7" podStartSLOduration=3.937842518 podStartE2EDuration="7.433370327s" podCreationTimestamp="2026-03-12 09:51:51 +0000 UTC" firstStartedPulling="2026-03-12 09:51:54.328031855 +0000 UTC m=+6787.910067608" lastFinishedPulling="2026-03-12 09:51:57.823559694 +0000 UTC m=+6791.405595417" observedRunningTime="2026-03-12 09:51:58.424870004 +0000 UTC m=+6792.006905747" watchObservedRunningTime="2026-03-12 09:51:58.433370327 +0000 UTC m=+6792.015406080" Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.160364 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555152-dm6ss"] Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.163827 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555152-dm6ss" Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.166835 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.167323 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.167595 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.175538 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555152-dm6ss"] Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.272547 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6xz\" (UniqueName: \"kubernetes.io/projected/2466c4c0-5727-45bc-9502-d1a2d0d5c88d-kube-api-access-bz6xz\") pod \"auto-csr-approver-29555152-dm6ss\" (UID: \"2466c4c0-5727-45bc-9502-d1a2d0d5c88d\") " pod="openshift-infra/auto-csr-approver-29555152-dm6ss" Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.374995 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6xz\" (UniqueName: \"kubernetes.io/projected/2466c4c0-5727-45bc-9502-d1a2d0d5c88d-kube-api-access-bz6xz\") pod \"auto-csr-approver-29555152-dm6ss\" (UID: \"2466c4c0-5727-45bc-9502-d1a2d0d5c88d\") " pod="openshift-infra/auto-csr-approver-29555152-dm6ss" Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.398851 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz6xz\" (UniqueName: \"kubernetes.io/projected/2466c4c0-5727-45bc-9502-d1a2d0d5c88d-kube-api-access-bz6xz\") pod \"auto-csr-approver-29555152-dm6ss\" (UID: \"2466c4c0-5727-45bc-9502-d1a2d0d5c88d\") " 
pod="openshift-infra/auto-csr-approver-29555152-dm6ss" Mar 12 09:52:00 crc kubenswrapper[4809]: I0312 09:52:00.487148 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555152-dm6ss" Mar 12 09:52:01 crc kubenswrapper[4809]: I0312 09:52:01.103318 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555152-dm6ss"] Mar 12 09:52:01 crc kubenswrapper[4809]: W0312 09:52:01.115345 4809 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2466c4c0_5727_45bc_9502_d1a2d0d5c88d.slice/crio-279c6530220dfb90aa35146597264362b62aa14a224db9c6bfaf3de4a8bb8a34 WatchSource:0}: Error finding container 279c6530220dfb90aa35146597264362b62aa14a224db9c6bfaf3de4a8bb8a34: Status 404 returned error can't find the container with id 279c6530220dfb90aa35146597264362b62aa14a224db9c6bfaf3de4a8bb8a34 Mar 12 09:52:01 crc kubenswrapper[4809]: I0312 09:52:01.425925 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555152-dm6ss" event={"ID":"2466c4c0-5727-45bc-9502-d1a2d0d5c88d","Type":"ContainerStarted","Data":"279c6530220dfb90aa35146597264362b62aa14a224db9c6bfaf3de4a8bb8a34"} Mar 12 09:52:02 crc kubenswrapper[4809]: I0312 09:52:02.286391 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:52:02 crc kubenswrapper[4809]: I0312 09:52:02.286725 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:52:02 crc kubenswrapper[4809]: I0312 09:52:02.355660 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:52:03 crc kubenswrapper[4809]: I0312 09:52:03.449496 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29555152-dm6ss" event={"ID":"2466c4c0-5727-45bc-9502-d1a2d0d5c88d","Type":"ContainerStarted","Data":"ab58e787898d6bd1a77660e9f2a94330bb2c6b29d44da99bef19f1794d0da7ca"} Mar 12 09:52:03 crc kubenswrapper[4809]: I0312 09:52:03.483395 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555152-dm6ss" podStartSLOduration=2.374660776 podStartE2EDuration="3.483367887s" podCreationTimestamp="2026-03-12 09:52:00 +0000 UTC" firstStartedPulling="2026-03-12 09:52:01.118558751 +0000 UTC m=+6794.700594484" lastFinishedPulling="2026-03-12 09:52:02.227265862 +0000 UTC m=+6795.809301595" observedRunningTime="2026-03-12 09:52:03.470751032 +0000 UTC m=+6797.052786775" watchObservedRunningTime="2026-03-12 09:52:03.483367887 +0000 UTC m=+6797.065403630" Mar 12 09:52:04 crc kubenswrapper[4809]: I0312 09:52:04.463257 4809 generic.go:334] "Generic (PLEG): container finished" podID="2466c4c0-5727-45bc-9502-d1a2d0d5c88d" containerID="ab58e787898d6bd1a77660e9f2a94330bb2c6b29d44da99bef19f1794d0da7ca" exitCode=0 Mar 12 09:52:04 crc kubenswrapper[4809]: I0312 09:52:04.463326 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555152-dm6ss" event={"ID":"2466c4c0-5727-45bc-9502-d1a2d0d5c88d","Type":"ContainerDied","Data":"ab58e787898d6bd1a77660e9f2a94330bb2c6b29d44da99bef19f1794d0da7ca"} Mar 12 09:52:05 crc kubenswrapper[4809]: I0312 09:52:05.908412 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555152-dm6ss" Mar 12 09:52:05 crc kubenswrapper[4809]: I0312 09:52:05.977469 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz6xz\" (UniqueName: \"kubernetes.io/projected/2466c4c0-5727-45bc-9502-d1a2d0d5c88d-kube-api-access-bz6xz\") pod \"2466c4c0-5727-45bc-9502-d1a2d0d5c88d\" (UID: \"2466c4c0-5727-45bc-9502-d1a2d0d5c88d\") " Mar 12 09:52:05 crc kubenswrapper[4809]: I0312 09:52:05.987249 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2466c4c0-5727-45bc-9502-d1a2d0d5c88d-kube-api-access-bz6xz" (OuterVolumeSpecName: "kube-api-access-bz6xz") pod "2466c4c0-5727-45bc-9502-d1a2d0d5c88d" (UID: "2466c4c0-5727-45bc-9502-d1a2d0d5c88d"). InnerVolumeSpecName "kube-api-access-bz6xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:52:06 crc kubenswrapper[4809]: I0312 09:52:06.082350 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz6xz\" (UniqueName: \"kubernetes.io/projected/2466c4c0-5727-45bc-9502-d1a2d0d5c88d-kube-api-access-bz6xz\") on node \"crc\" DevicePath \"\"" Mar 12 09:52:06 crc kubenswrapper[4809]: I0312 09:52:06.506063 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555152-dm6ss" event={"ID":"2466c4c0-5727-45bc-9502-d1a2d0d5c88d","Type":"ContainerDied","Data":"279c6530220dfb90aa35146597264362b62aa14a224db9c6bfaf3de4a8bb8a34"} Mar 12 09:52:06 crc kubenswrapper[4809]: I0312 09:52:06.506176 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279c6530220dfb90aa35146597264362b62aa14a224db9c6bfaf3de4a8bb8a34" Mar 12 09:52:06 crc kubenswrapper[4809]: I0312 09:52:06.506297 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555152-dm6ss" Mar 12 09:52:06 crc kubenswrapper[4809]: I0312 09:52:06.575301 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555146-b9bqf"] Mar 12 09:52:06 crc kubenswrapper[4809]: I0312 09:52:06.590630 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555146-b9bqf"] Mar 12 09:52:07 crc kubenswrapper[4809]: I0312 09:52:07.132370 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a6b2ec-c23e-4fda-9651-6dc234d5b04f" path="/var/lib/kubelet/pods/37a6b2ec-c23e-4fda-9651-6dc234d5b04f/volumes" Mar 12 09:52:12 crc kubenswrapper[4809]: I0312 09:52:12.343989 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:52:12 crc kubenswrapper[4809]: I0312 09:52:12.426147 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nz8p7"] Mar 12 09:52:12 crc kubenswrapper[4809]: I0312 09:52:12.585033 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nz8p7" podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerName="registry-server" containerID="cri-o://c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11" gracePeriod=2 Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.174871 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.294130 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tln9l\" (UniqueName: \"kubernetes.io/projected/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-kube-api-access-tln9l\") pod \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.294453 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-catalog-content\") pod \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.294573 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-utilities\") pod \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\" (UID: \"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a\") " Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.297328 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-utilities" (OuterVolumeSpecName: "utilities") pod "f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" (UID: "f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.302978 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-kube-api-access-tln9l" (OuterVolumeSpecName: "kube-api-access-tln9l") pod "f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" (UID: "f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a"). InnerVolumeSpecName "kube-api-access-tln9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.397399 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.397433 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tln9l\" (UniqueName: \"kubernetes.io/projected/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-kube-api-access-tln9l\") on node \"crc\" DevicePath \"\"" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.414959 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" (UID: "f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.499670 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.604395 4809 generic.go:334] "Generic (PLEG): container finished" podID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerID="c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11" exitCode=0 Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.604469 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nz8p7" event={"ID":"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a","Type":"ContainerDied","Data":"c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11"} Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.604518 4809 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-nz8p7" event={"ID":"f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a","Type":"ContainerDied","Data":"75f7cb443dfe5881e556270c454743118e5915c918d6def023c116c11b33a80f"} Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.604536 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nz8p7" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.604551 4809 scope.go:117] "RemoveContainer" containerID="c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.654274 4809 scope.go:117] "RemoveContainer" containerID="84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.675727 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nz8p7"] Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.688812 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nz8p7"] Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.696339 4809 scope.go:117] "RemoveContainer" containerID="0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.749833 4809 scope.go:117] "RemoveContainer" containerID="c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11" Mar 12 09:52:13 crc kubenswrapper[4809]: E0312 09:52:13.755127 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11\": container with ID starting with c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11 not found: ID does not exist" containerID="c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 
09:52:13.755394 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11"} err="failed to get container status \"c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11\": rpc error: code = NotFound desc = could not find container \"c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11\": container with ID starting with c9cc207295420229c2ca27c82a51519364c4348c0f16b8a436514f259be0ea11 not found: ID does not exist" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.755509 4809 scope.go:117] "RemoveContainer" containerID="84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6" Mar 12 09:52:13 crc kubenswrapper[4809]: E0312 09:52:13.756413 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6\": container with ID starting with 84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6 not found: ID does not exist" containerID="84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.756482 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6"} err="failed to get container status \"84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6\": rpc error: code = NotFound desc = could not find container \"84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6\": container with ID starting with 84d7b9664f9430420e0605c8e336d0c8419de4319e415f000c135de691327bb6 not found: ID does not exist" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.756542 4809 scope.go:117] "RemoveContainer" containerID="0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865" Mar 12 09:52:13 crc 
kubenswrapper[4809]: E0312 09:52:13.757303 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865\": container with ID starting with 0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865 not found: ID does not exist" containerID="0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865" Mar 12 09:52:13 crc kubenswrapper[4809]: I0312 09:52:13.757356 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865"} err="failed to get container status \"0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865\": rpc error: code = NotFound desc = could not find container \"0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865\": container with ID starting with 0d955e5bf6a001970dfc84cb839e5a6bf22f5a9191a8946bef92ef8930dbd865 not found: ID does not exist" Mar 12 09:52:15 crc kubenswrapper[4809]: I0312 09:52:15.117604 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" path="/var/lib/kubelet/pods/f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a/volumes" Mar 12 09:53:00 crc kubenswrapper[4809]: I0312 09:53:00.079511 4809 scope.go:117] "RemoveContainer" containerID="dd6074b5dcdc8a520d40a923cd89fdac3ea666ccfe0e17fafa50e729cbe02803" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.799048 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ckt5g"] Mar 12 09:53:09 crc kubenswrapper[4809]: E0312 09:53:09.801085 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerName="registry-server" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.802047 4809 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerName="registry-server" Mar 12 09:53:09 crc kubenswrapper[4809]: E0312 09:53:09.802107 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerName="extract-utilities" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.802241 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerName="extract-utilities" Mar 12 09:53:09 crc kubenswrapper[4809]: E0312 09:53:09.802318 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerName="extract-content" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.802328 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerName="extract-content" Mar 12 09:53:09 crc kubenswrapper[4809]: E0312 09:53:09.802385 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2466c4c0-5727-45bc-9502-d1a2d0d5c88d" containerName="oc" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.802396 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="2466c4c0-5727-45bc-9502-d1a2d0d5c88d" containerName="oc" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.808344 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="2466c4c0-5727-45bc-9502-d1a2d0d5c88d" containerName="oc" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.808417 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9fcf998-7d69-4d68-a2c0-a795c8ae3d1a" containerName="registry-server" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.819156 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.860487 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckt5g"] Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.951433 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-utilities\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.951972 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-catalog-content\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:09 crc kubenswrapper[4809]: I0312 09:53:09.952174 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pz5s\" (UniqueName: \"kubernetes.io/projected/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-kube-api-access-7pz5s\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:10 crc kubenswrapper[4809]: I0312 09:53:10.054772 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-catalog-content\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:10 crc kubenswrapper[4809]: I0312 09:53:10.054956 4809 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7pz5s\" (UniqueName: \"kubernetes.io/projected/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-kube-api-access-7pz5s\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:10 crc kubenswrapper[4809]: I0312 09:53:10.055049 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-utilities\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:10 crc kubenswrapper[4809]: I0312 09:53:10.055348 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-catalog-content\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:10 crc kubenswrapper[4809]: I0312 09:53:10.055531 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-utilities\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:10 crc kubenswrapper[4809]: I0312 09:53:10.075911 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pz5s\" (UniqueName: \"kubernetes.io/projected/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-kube-api-access-7pz5s\") pod \"redhat-marketplace-ckt5g\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:10 crc kubenswrapper[4809]: I0312 09:53:10.160692 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:10 crc kubenswrapper[4809]: I0312 09:53:10.706036 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckt5g"] Mar 12 09:53:11 crc kubenswrapper[4809]: I0312 09:53:11.360294 4809 generic.go:334] "Generic (PLEG): container finished" podID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerID="8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173" exitCode=0 Mar 12 09:53:11 crc kubenswrapper[4809]: I0312 09:53:11.360917 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckt5g" event={"ID":"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97","Type":"ContainerDied","Data":"8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173"} Mar 12 09:53:11 crc kubenswrapper[4809]: I0312 09:53:11.360956 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckt5g" event={"ID":"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97","Type":"ContainerStarted","Data":"88ad90da58c78894252ddc6c4c915ecc940c8f745cc74f629085f28e10aad507"} Mar 12 09:53:11 crc kubenswrapper[4809]: I0312 09:53:11.363956 4809 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 09:53:12 crc kubenswrapper[4809]: I0312 09:53:12.380597 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckt5g" event={"ID":"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97","Type":"ContainerStarted","Data":"49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35"} Mar 12 09:53:14 crc kubenswrapper[4809]: I0312 09:53:14.409079 4809 generic.go:334] "Generic (PLEG): container finished" podID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerID="49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35" exitCode=0 Mar 12 09:53:14 crc kubenswrapper[4809]: I0312 09:53:14.409179 4809 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-ckt5g" event={"ID":"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97","Type":"ContainerDied","Data":"49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35"} Mar 12 09:53:15 crc kubenswrapper[4809]: I0312 09:53:15.048194 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:53:15 crc kubenswrapper[4809]: I0312 09:53:15.048822 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 09:53:15 crc kubenswrapper[4809]: I0312 09:53:15.423599 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckt5g" event={"ID":"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97","Type":"ContainerStarted","Data":"12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01"} Mar 12 09:53:15 crc kubenswrapper[4809]: I0312 09:53:15.446654 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ckt5g" podStartSLOduration=2.908546865 podStartE2EDuration="6.446634947s" podCreationTimestamp="2026-03-12 09:53:09 +0000 UTC" firstStartedPulling="2026-03-12 09:53:11.362586879 +0000 UTC m=+6864.944622632" lastFinishedPulling="2026-03-12 09:53:14.900674981 +0000 UTC m=+6868.482710714" observedRunningTime="2026-03-12 09:53:15.443022868 +0000 UTC m=+6869.025058601" watchObservedRunningTime="2026-03-12 09:53:15.446634947 +0000 UTC m=+6869.028670680" Mar 12 09:53:20 crc kubenswrapper[4809]: I0312 09:53:20.161779 4809 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:20 crc kubenswrapper[4809]: I0312 09:53:20.163162 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:20 crc kubenswrapper[4809]: I0312 09:53:20.243625 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:20 crc kubenswrapper[4809]: I0312 09:53:20.532492 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:20 crc kubenswrapper[4809]: I0312 09:53:20.590225 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckt5g"] Mar 12 09:53:22 crc kubenswrapper[4809]: I0312 09:53:22.499205 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ckt5g" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerName="registry-server" containerID="cri-o://12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01" gracePeriod=2 Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.191492 4809 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.306425 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pz5s\" (UniqueName: \"kubernetes.io/projected/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-kube-api-access-7pz5s\") pod \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.306899 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-catalog-content\") pod \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.307079 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-utilities\") pod \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\" (UID: \"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97\") " Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.309644 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-utilities" (OuterVolumeSpecName: "utilities") pod "15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" (UID: "15168b4f-7cd8-4a03-b3b7-5fddd5e32e97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.316338 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-kube-api-access-7pz5s" (OuterVolumeSpecName: "kube-api-access-7pz5s") pod "15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" (UID: "15168b4f-7cd8-4a03-b3b7-5fddd5e32e97"). InnerVolumeSpecName "kube-api-access-7pz5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.337974 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" (UID: "15168b4f-7cd8-4a03-b3b7-5fddd5e32e97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.410586 4809 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.410620 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pz5s\" (UniqueName: \"kubernetes.io/projected/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-kube-api-access-7pz5s\") on node \"crc\" DevicePath \"\"" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.410630 4809 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.511682 4809 generic.go:334] "Generic (PLEG): container finished" podID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerID="12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01" exitCode=0 Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.511728 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckt5g" event={"ID":"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97","Type":"ContainerDied","Data":"12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01"} Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.511761 4809 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckt5g" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.511786 4809 scope.go:117] "RemoveContainer" containerID="12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.511769 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckt5g" event={"ID":"15168b4f-7cd8-4a03-b3b7-5fddd5e32e97","Type":"ContainerDied","Data":"88ad90da58c78894252ddc6c4c915ecc940c8f745cc74f629085f28e10aad507"} Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.553647 4809 scope.go:117] "RemoveContainer" containerID="49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.558375 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckt5g"] Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.578513 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckt5g"] Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.629582 4809 scope.go:117] "RemoveContainer" containerID="8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.656440 4809 scope.go:117] "RemoveContainer" containerID="12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01" Mar 12 09:53:23 crc kubenswrapper[4809]: E0312 09:53:23.656899 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01\": container with ID starting with 12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01 not found: ID does not exist" containerID="12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.656939 4809 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01"} err="failed to get container status \"12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01\": rpc error: code = NotFound desc = could not find container \"12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01\": container with ID starting with 12409266bd6ba5480c0a2683e1a029015ff2136ab64d8f9644d96daffe7a5a01 not found: ID does not exist" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.656982 4809 scope.go:117] "RemoveContainer" containerID="49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35" Mar 12 09:53:23 crc kubenswrapper[4809]: E0312 09:53:23.657260 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35\": container with ID starting with 49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35 not found: ID does not exist" containerID="49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.657283 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35"} err="failed to get container status \"49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35\": rpc error: code = NotFound desc = could not find container \"49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35\": container with ID starting with 49447f679afb1d068252ed26ed99b86db024b5087cf712fb39ae11a320ab1b35 not found: ID does not exist" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.657296 4809 scope.go:117] "RemoveContainer" containerID="8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173" Mar 12 09:53:23 crc kubenswrapper[4809]: E0312 
09:53:23.657461 4809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173\": container with ID starting with 8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173 not found: ID does not exist" containerID="8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173" Mar 12 09:53:23 crc kubenswrapper[4809]: I0312 09:53:23.657485 4809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173"} err="failed to get container status \"8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173\": rpc error: code = NotFound desc = could not find container \"8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173\": container with ID starting with 8d63f5a618df06b389a960513e4f7643454390472efeea03f3b43a00bd40e173 not found: ID does not exist" Mar 12 09:53:25 crc kubenswrapper[4809]: I0312 09:53:25.118974 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" path="/var/lib/kubelet/pods/15168b4f-7cd8-4a03-b3b7-5fddd5e32e97/volumes" Mar 12 09:53:45 crc kubenswrapper[4809]: I0312 09:53:45.048408 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 09:53:45 crc kubenswrapper[4809]: I0312 09:53:45.049293 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.169437 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555154-8llz4"] Mar 12 09:54:00 crc kubenswrapper[4809]: E0312 09:54:00.170726 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerName="extract-content" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.170744 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerName="extract-content" Mar 12 09:54:00 crc kubenswrapper[4809]: E0312 09:54:00.170786 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerName="registry-server" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.170794 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerName="registry-server" Mar 12 09:54:00 crc kubenswrapper[4809]: E0312 09:54:00.170827 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerName="extract-utilities" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.170839 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerName="extract-utilities" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.171152 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="15168b4f-7cd8-4a03-b3b7-5fddd5e32e97" containerName="registry-server" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.172343 4809 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555154-8llz4" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.175621 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.176189 4809 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-tvxqv" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.176948 4809 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.180522 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555154-8llz4"] Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.315417 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7fw\" (UniqueName: \"kubernetes.io/projected/236db853-92f1-4bc1-8653-07489c86a72b-kube-api-access-zn7fw\") pod \"auto-csr-approver-29555154-8llz4\" (UID: \"236db853-92f1-4bc1-8653-07489c86a72b\") " pod="openshift-infra/auto-csr-approver-29555154-8llz4" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.418167 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn7fw\" (UniqueName: \"kubernetes.io/projected/236db853-92f1-4bc1-8653-07489c86a72b-kube-api-access-zn7fw\") pod \"auto-csr-approver-29555154-8llz4\" (UID: \"236db853-92f1-4bc1-8653-07489c86a72b\") " pod="openshift-infra/auto-csr-approver-29555154-8llz4" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.438044 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn7fw\" (UniqueName: \"kubernetes.io/projected/236db853-92f1-4bc1-8653-07489c86a72b-kube-api-access-zn7fw\") pod \"auto-csr-approver-29555154-8llz4\" (UID: \"236db853-92f1-4bc1-8653-07489c86a72b\") " 
pod="openshift-infra/auto-csr-approver-29555154-8llz4" Mar 12 09:54:00 crc kubenswrapper[4809]: I0312 09:54:00.525392 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555154-8llz4" Mar 12 09:54:01 crc kubenswrapper[4809]: I0312 09:54:01.078817 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555154-8llz4"] Mar 12 09:54:02 crc kubenswrapper[4809]: I0312 09:54:02.059667 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555154-8llz4" event={"ID":"236db853-92f1-4bc1-8653-07489c86a72b","Type":"ContainerStarted","Data":"557469496afcd22b383e4e4901ccc2abecff090bac22cf4d913a3c1a07b385d8"} Mar 12 09:54:03 crc kubenswrapper[4809]: I0312 09:54:03.211645 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555154-8llz4" podStartSLOduration=1.807079977 podStartE2EDuration="3.211621956s" podCreationTimestamp="2026-03-12 09:54:00 +0000 UTC" firstStartedPulling="2026-03-12 09:54:01.086215055 +0000 UTC m=+6914.668250808" lastFinishedPulling="2026-03-12 09:54:02.490757054 +0000 UTC m=+6916.072792787" observedRunningTime="2026-03-12 09:54:03.157232208 +0000 UTC m=+6916.739267931" watchObservedRunningTime="2026-03-12 09:54:03.211621956 +0000 UTC m=+6916.793657689" Mar 12 09:54:03 crc kubenswrapper[4809]: I0312 09:54:03.229224 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555154-8llz4" event={"ID":"236db853-92f1-4bc1-8653-07489c86a72b","Type":"ContainerStarted","Data":"d4ea17584bd34d55d797db5fa7896503dee63e50e7d3ac65d8b33e706ae47789"} Mar 12 09:54:05 crc kubenswrapper[4809]: I0312 09:54:05.144926 4809 generic.go:334] "Generic (PLEG): container finished" podID="236db853-92f1-4bc1-8653-07489c86a72b" containerID="d4ea17584bd34d55d797db5fa7896503dee63e50e7d3ac65d8b33e706ae47789" exitCode=0 Mar 12 09:54:05 crc 
kubenswrapper[4809]: I0312 09:54:05.145055 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555154-8llz4" event={"ID":"236db853-92f1-4bc1-8653-07489c86a72b","Type":"ContainerDied","Data":"d4ea17584bd34d55d797db5fa7896503dee63e50e7d3ac65d8b33e706ae47789"} Mar 12 09:54:06 crc kubenswrapper[4809]: I0312 09:54:06.587234 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555154-8llz4" Mar 12 09:54:06 crc kubenswrapper[4809]: I0312 09:54:06.716848 4809 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn7fw\" (UniqueName: \"kubernetes.io/projected/236db853-92f1-4bc1-8653-07489c86a72b-kube-api-access-zn7fw\") pod \"236db853-92f1-4bc1-8653-07489c86a72b\" (UID: \"236db853-92f1-4bc1-8653-07489c86a72b\") " Mar 12 09:54:06 crc kubenswrapper[4809]: I0312 09:54:06.731686 4809 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236db853-92f1-4bc1-8653-07489c86a72b-kube-api-access-zn7fw" (OuterVolumeSpecName: "kube-api-access-zn7fw") pod "236db853-92f1-4bc1-8653-07489c86a72b" (UID: "236db853-92f1-4bc1-8653-07489c86a72b"). InnerVolumeSpecName "kube-api-access-zn7fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 09:54:06 crc kubenswrapper[4809]: I0312 09:54:06.820505 4809 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn7fw\" (UniqueName: \"kubernetes.io/projected/236db853-92f1-4bc1-8653-07489c86a72b-kube-api-access-zn7fw\") on node \"crc\" DevicePath \"\"" Mar 12 09:54:07 crc kubenswrapper[4809]: I0312 09:54:07.166669 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555154-8llz4" event={"ID":"236db853-92f1-4bc1-8653-07489c86a72b","Type":"ContainerDied","Data":"557469496afcd22b383e4e4901ccc2abecff090bac22cf4d913a3c1a07b385d8"} Mar 12 09:54:07 crc kubenswrapper[4809]: I0312 09:54:07.166730 4809 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="557469496afcd22b383e4e4901ccc2abecff090bac22cf4d913a3c1a07b385d8" Mar 12 09:54:07 crc kubenswrapper[4809]: I0312 09:54:07.166771 4809 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555154-8llz4" Mar 12 09:54:07 crc kubenswrapper[4809]: I0312 09:54:07.666825 4809 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555148-xt5ch"] Mar 12 09:54:07 crc kubenswrapper[4809]: I0312 09:54:07.677480 4809 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555148-xt5ch"] Mar 12 09:54:09 crc kubenswrapper[4809]: I0312 09:54:09.120252 4809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edec864a-4369-4b47-a470-e466d4b1083f" path="/var/lib/kubelet/pods/edec864a-4369-4b47-a470-e466d4b1083f/volumes" Mar 12 09:54:15 crc kubenswrapper[4809]: I0312 09:54:15.048833 4809 patch_prober.go:28] interesting pod/machine-config-daemon-h6d4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body=
Mar 12 09:54:15 crc kubenswrapper[4809]: I0312 09:54:15.049403 4809 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 09:54:15 crc kubenswrapper[4809]: I0312 09:54:15.049440 4809 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c"
Mar 12 09:54:15 crc kubenswrapper[4809]: I0312 09:54:15.049963 4809 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cac5286245729b3abd73c38c92528cc73c27638d4f267bc01eb2d31b515874ed"} pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 09:54:15 crc kubenswrapper[4809]: I0312 09:54:15.050026 4809 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" podUID="101483ba-8ed3-40eb-9855-077e9add029f" containerName="machine-config-daemon" containerID="cri-o://cac5286245729b3abd73c38c92528cc73c27638d4f267bc01eb2d31b515874ed" gracePeriod=600
Mar 12 09:54:15 crc kubenswrapper[4809]: I0312 09:54:15.260790 4809 generic.go:334] "Generic (PLEG): container finished" podID="101483ba-8ed3-40eb-9855-077e9add029f" containerID="cac5286245729b3abd73c38c92528cc73c27638d4f267bc01eb2d31b515874ed" exitCode=0
Mar 12 09:54:15 crc kubenswrapper[4809]: I0312 09:54:15.260979 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerDied","Data":"cac5286245729b3abd73c38c92528cc73c27638d4f267bc01eb2d31b515874ed"}
Mar 12 09:54:15 crc kubenswrapper[4809]: I0312 09:54:15.261157 4809 scope.go:117] "RemoveContainer" containerID="2cc4d61696c1a97a33dbf89b6027d85f4124e9d804e87e559c4cc8ddb20dd8d3"
Mar 12 09:54:16 crc kubenswrapper[4809]: I0312 09:54:16.275409 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6d4c" event={"ID":"101483ba-8ed3-40eb-9855-077e9add029f","Type":"ContainerStarted","Data":"a77b6cedebc5748a29ec57f63d56471e20daa8b151935acc43894fd8886b7a83"}
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.214569 4809 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zsrnd"]
Mar 12 09:54:23 crc kubenswrapper[4809]: E0312 09:54:23.216066 4809 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236db853-92f1-4bc1-8653-07489c86a72b" containerName="oc"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.216082 4809 state_mem.go:107] "Deleted CPUSet assignment" podUID="236db853-92f1-4bc1-8653-07489c86a72b" containerName="oc"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.216458 4809 memory_manager.go:354] "RemoveStaleState removing state" podUID="236db853-92f1-4bc1-8653-07489c86a72b" containerName="oc"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.219242 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.242059 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsrnd"]
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.358984 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2caf88fc-42ed-4977-935a-2ff9c74ca76d-utilities\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.359252 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wgh\" (UniqueName: \"kubernetes.io/projected/2caf88fc-42ed-4977-935a-2ff9c74ca76d-kube-api-access-64wgh\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.359952 4809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2caf88fc-42ed-4977-935a-2ff9c74ca76d-catalog-content\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.462832 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2caf88fc-42ed-4977-935a-2ff9c74ca76d-utilities\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.462971 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64wgh\" (UniqueName: \"kubernetes.io/projected/2caf88fc-42ed-4977-935a-2ff9c74ca76d-kube-api-access-64wgh\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.463105 4809 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2caf88fc-42ed-4977-935a-2ff9c74ca76d-catalog-content\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.463807 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2caf88fc-42ed-4977-935a-2ff9c74ca76d-catalog-content\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.463822 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2caf88fc-42ed-4977-935a-2ff9c74ca76d-utilities\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.491165 4809 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64wgh\" (UniqueName: \"kubernetes.io/projected/2caf88fc-42ed-4977-935a-2ff9c74ca76d-kube-api-access-64wgh\") pod \"redhat-operators-zsrnd\" (UID: \"2caf88fc-42ed-4977-935a-2ff9c74ca76d\") " pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:23 crc kubenswrapper[4809]: I0312 09:54:23.543393 4809 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:24 crc kubenswrapper[4809]: I0312 09:54:24.126473 4809 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsrnd"]
Mar 12 09:54:24 crc kubenswrapper[4809]: I0312 09:54:24.370386 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsrnd" event={"ID":"2caf88fc-42ed-4977-935a-2ff9c74ca76d","Type":"ContainerStarted","Data":"561af2e814d67548414ed05f0e5fd6a055855c2faf41dcfa4551c08fb1415f34"}
Mar 12 09:54:25 crc kubenswrapper[4809]: I0312 09:54:25.384018 4809 generic.go:334] "Generic (PLEG): container finished" podID="2caf88fc-42ed-4977-935a-2ff9c74ca76d" containerID="fee00e2284bdfbefddfad8712da516cad1a5ce72a164a712f97fc83cd1ff1e28" exitCode=0
Mar 12 09:54:25 crc kubenswrapper[4809]: I0312 09:54:25.384268 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsrnd" event={"ID":"2caf88fc-42ed-4977-935a-2ff9c74ca76d","Type":"ContainerDied","Data":"fee00e2284bdfbefddfad8712da516cad1a5ce72a164a712f97fc83cd1ff1e28"}
Mar 12 09:54:26 crc kubenswrapper[4809]: I0312 09:54:26.397255 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsrnd" event={"ID":"2caf88fc-42ed-4977-935a-2ff9c74ca76d","Type":"ContainerStarted","Data":"65690ddaa112af4a61cf9314c82ab78560604ee00717049343d33b9acab2b8d7"}
Mar 12 09:54:32 crc kubenswrapper[4809]: I0312 09:54:32.507567 4809 generic.go:334] "Generic (PLEG): container finished" podID="2caf88fc-42ed-4977-935a-2ff9c74ca76d" containerID="65690ddaa112af4a61cf9314c82ab78560604ee00717049343d33b9acab2b8d7" exitCode=0
Mar 12 09:54:32 crc kubenswrapper[4809]: I0312 09:54:32.507701 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsrnd" event={"ID":"2caf88fc-42ed-4977-935a-2ff9c74ca76d","Type":"ContainerDied","Data":"65690ddaa112af4a61cf9314c82ab78560604ee00717049343d33b9acab2b8d7"}
Mar 12 09:54:33 crc kubenswrapper[4809]: I0312 09:54:33.538506 4809 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsrnd" event={"ID":"2caf88fc-42ed-4977-935a-2ff9c74ca76d","Type":"ContainerStarted","Data":"71bbe26e82a48291e39a0f35008e6e9539e102e8654f922a1fc71d41d0c17641"}
Mar 12 09:54:33 crc kubenswrapper[4809]: I0312 09:54:33.543656 4809 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:33 crc kubenswrapper[4809]: I0312 09:54:33.543698 4809 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zsrnd"
Mar 12 09:54:33 crc kubenswrapper[4809]: I0312 09:54:33.578719 4809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zsrnd" podStartSLOduration=2.864940584 podStartE2EDuration="10.578697554s" podCreationTimestamp="2026-03-12 09:54:23 +0000 UTC" firstStartedPulling="2026-03-12 09:54:25.386566239 +0000 UTC m=+6938.968601982" lastFinishedPulling="2026-03-12 09:54:33.100323219 +0000 UTC m=+6946.682358952" observedRunningTime="2026-03-12 09:54:33.561793532 +0000 UTC m=+6947.143829275" watchObservedRunningTime="2026-03-12 09:54:33.578697554 +0000 UTC m=+6947.160733277"
Mar 12 09:54:34 crc kubenswrapper[4809]: I0312 09:54:34.611615 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zsrnd" podUID="2caf88fc-42ed-4977-935a-2ff9c74ca76d" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:54:34 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:54:34 crc kubenswrapper[4809]: >
Mar 12 09:54:44 crc kubenswrapper[4809]: I0312 09:54:44.614765 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zsrnd" podUID="2caf88fc-42ed-4977-935a-2ff9c74ca76d" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:54:44 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:54:44 crc kubenswrapper[4809]: >
Mar 12 09:54:54 crc kubenswrapper[4809]: I0312 09:54:54.611322 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zsrnd" podUID="2caf88fc-42ed-4977-935a-2ff9c74ca76d" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:54:54 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:54:54 crc kubenswrapper[4809]: >
Mar 12 09:55:00 crc kubenswrapper[4809]: I0312 09:55:00.280274 4809 scope.go:117] "RemoveContainer" containerID="e184dc6af4c4fea4a4a0f265a066dde88a930a745b0913cc3d3e3bb5eb0ecc18"
Mar 12 09:55:04 crc kubenswrapper[4809]: I0312 09:55:04.599785 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zsrnd" podUID="2caf88fc-42ed-4977-935a-2ff9c74ca76d" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:55:04 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:55:04 crc kubenswrapper[4809]: >
Mar 12 09:55:14 crc kubenswrapper[4809]: I0312 09:55:14.602969 4809 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zsrnd" podUID="2caf88fc-42ed-4977-935a-2ff9c74ca76d" containerName="registry-server" probeResult="failure" output=<
Mar 12 09:55:14 crc kubenswrapper[4809]: timeout: failed to connect service ":50051" within 1s
Mar 12 09:55:14 crc kubenswrapper[4809]: >